Combined UCS Blogs

Vegetable Production in the US: Lots of Potatoes, More Kale, and Other Trends

Photo: iStock.com/Alija

Vegetables—they’ve got me working overtime lately. That’s because my preschool-age daughter recently seems less than excited about these healthy foods. She’ll likely outgrow her (very common) picky-eater phase and enjoy vegetables. I hope.

And what’s not to love about vegetables? Copious research suggests that eating them helps prevent a variety of diseases, and other research (some of it mine) shows that vegetables and healthy plant foods are less carbon-intensive to produce than other foods. Yet studies also confirm that Americans aren’t eating enough vegetables or fruits.

So naturally, when I heard that new findings from USDA’s 2017 Census of Agriculture were out, the first table I looked at was on vegetable production. (For a high-level overview of other trends evident in the census, check out my colleague Dr. Marcia DeLonge’s recent blog post.) Here are some key findings from my quick scan of the Census data on vegetable production.

Total 2017 vegetable production was about the same as 2012

The new Agriculture Census reports that there were roughly 74,000 farms growing vegetables in the US, which represents 4 percent of all US farms in 2017. Between 2012 and 2017 the number of farms increased by 3 percent. Across vegetable farms there were 4.4 million acres in production (visualize the land area of Connecticut and Rhode Island combined), representing just 0.4 percent of all agricultural acreage in the US. Total production acreage of vegetables decreased by about 3 percent since 2012 (that’s 126,000 acres lost).

Potatoes dominate vegetable production

So which vegetable dominates vegetable production? Surprise, it’s potatoes! Potatoes account for a little over one-quarter of all US vegetable production acres (1.1 million) and 22% of all vegetable farms (or 16,554 farms). Not surprisingly, they also happen to be the most consumed vegetable in the US. Potato acreage declined by 3% over the five-year period, while the number of potato farms decreased by 21%.

Sweet corn came in second in terms of total acreage in 2017, grown on almost 500,000 acres, followed by lettuce (including head, leaf, and romaine), tomatoes, snap beans, sweet potatoes, and non-green onions. The graph below shows the number of farms and acres in production in 2017 for these top vegetable crops by acreage. (Also, now is a good time to tell you that I like charts and graphs, so expect more of these in my blogs.) Compared to 2012, the number of farms and acreage in production declined for sweet corn, snap beans, and tomatoes, but increased for lettuce and onions.

The top five crops in terms of farm numbers in 2017 were: tomatoes (28,673 farms), squash (22,704), sweet corn (20,784), summer squash (18,269), and snap beans (18,055).

Production of certain nutrient-dense vegetables has increased since 2012

There were some changes in vegetable acreage and farm numbers relative to the 2012 Census of Agriculture that I found notable and relevant from a healthy eating and nutrition perspective. First, acreage of sweet potatoes (a relative of the morning glory, packed with vitamin A, which supports vision and, some evidence suggests, may be protective against some cancers) increased by 38% (or by 47,000 acres, roughly the size of Madison, Wisconsin) relative to 2012. Acreage of spinach, a dark green vegetable the federal government suggests we ought to consume more of, also rose from roughly 46,000 acres in 2012 to 70,000 acres in 2017, a 51% increase. Kale acreage, though small in absolute terms, rose by 145% over the same time period. There were also substantial increases in acreage and production of ginseng, fresh cut herbs, mustard greens, and okra. Both acreage and the number of farms producing Brussels sprouts (my family’s favorite, packed with vitamin C, a powerful antioxidant that helps support immune function) went up over the same time period, too. All of this suggests growing demand for these products.

Some vegetable crops lost acreage compared to 2012, most notably (because of how substantial their declines were) green peas, chili peppers, other peas (such as black-eyed and crowder, the latter of which I find delicious), lima beans, and chicory. Interestingly, the number of farms producing these crops increased since 2012, suggesting some restructuring of production for these crops. Why that’s happening is open to further inquiry.

This isn’t a full analysis of the vegetable production industry—I could spend many more hours looking at these data and examining trends over time and across crops (and I would enjoy that greatly). But these are some key findings that I found interesting on my first peek at the 2017 Census of Agriculture data.

So why should we care about US vegetable production?

First, trends in domestic production tell us something about consumer demand. U.S. consumer demand for vegetables is met with both domestic and imported vegetables. Recent data indicate we import about 30% of our fresh vegetables and by my calculations using USDA data, we import about 37% of all vegetables. From a total value perspective, this means that a substantial amount of what we consume is grown here. Consequently, changes in domestic supply tell us a little something about changes in demand (although it’s not the entire picture).
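
For readers who like to see the arithmetic, here is a minimal sketch of how an import share like that 37% figure can be estimated from supply-side data. The dollar values in the sketch are placeholders I made up for illustration, not actual USDA numbers.

```python
# Rough sketch of an import-share calculation for vegetables.
# The values below are illustrative placeholders, NOT actual USDA data; swap in
# real production, import, and export values (in consistent units) to
# reproduce an estimate like the one described above.

def import_share(domestic_production, imports, exports):
    """Share of apparent domestic consumption that is supplied by imports."""
    apparent_consumption = domestic_production + imports - exports
    return imports / apparent_consumption

# Hypothetical values (billions of dollars), for illustration only.
production = 20.0
imports = 11.0
exports = 4.0

print(f"Estimated import share: {import_share(production, imports, exports):.0%}")
# -> "Estimated import share: 41%" with these made-up numbers
```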

However, we are importing more and more vegetables, according to an October 2018 report from the United States Department of Agriculture Economic Research Service. Higher levels of vegetable imports to the U.S. have upsides and downsides. In a recent interview, Bradley Rickard, Associate Professor at Cornell University, notes that more vegetable imports give “consumers access to more” vegetables and “the same things at lower prices.” That could be especially good news for consumers for whom cost is a barrier to buying vegetables consistently (although some research from the USDA Economic Research Service indicates that vegetables can be less expensive than other foods on a per-portion basis).

On the other hand, due to increased importation, domestic vegetable producers have to compete with producers in other countries that may have lower production costs. Competition such as this could hurt U.S. vegetable farmers.

While the US is a net importer of vegetables (i.e., we import more than we export), we do export some of the vegetables we produce, so international markets are an important source of revenue for U.S. vegetable farmers.

Second, tracking vegetable production geographically is important from a climate standpoint. US vegetable production is concentrated in certain parts of the country (see the map below, from the US Agriculture Census website), which means that extreme weather events, expected to become more frequent due to climate change, could hit these production clusters hard. In turn, this may have impacts on both vegetable consumers and producers.

Third, vegetable production is relatively labor intensive (measured as labor costs as a share of total gross cash income) compared to the production of other types of crops in the US. Recent reports show that some farmers—especially those in California who grow labor-intensive crops such as fruits, tree nuts, and vegetables—are increasingly experiencing farm labor shortages, which may impact the prices consumers pay for these products. As a result, trends in vegetable production are related to labor and immigration policy discussions.

Clearly, there is a lot at stake with U.S. vegetable production, for consumers and producers alike. While total acreage and number of farms growing vegetables hasn’t changed all that much since 2012, there have been some significant changes in the levels of production of specific vegetable crops. These specific changes warrant an additional inquiry into why they occurred, which may involve more digging into the Census or other data sources. This is what’s great about the Census of Agriculture. It tells us a great deal about American agriculture, while also stimulating additional questions that need to be answered.

Map: USDA/NASS

Self-Scheduling: How Inflexible Coal is Breaking Energy Markets

Over the past year, I have looked at the hourly operations of over one-third of the coal fleet in the US and have come to a startling conclusion: each and every one of the coal units I have investigated has been uneconomic for at least one month. That is, the costs to operate them in a given month exceeded the revenues they earned in the energy market that same month.

Financially, most (if not all) of these coal plants would have been better off turning off.

By operating during periods when operating costs exceed revenues, coal-fired power plant owners are costing consumers $1 billion a year in inflated utility bills.

Here is a simple way of looking at it: the fuel and variable costs associated with producing electricity (production costs) stay relatively flat over the course of a year. The market price, on the other hand, fluctuates. Most coal plants turn on and stay on for as long as they can. They might not run at full capacity all 8,760 hours of the year, but they will run at some minimum operating level rather than incur the expenses of turning off. However, the losses incurred by staying on often far exceed the expenses of turning off.

In this illustrative example, a power plant with $26/MWh production costs that stays on all year will lose money in the spring and fall. Maybe it can make up for those losses in the summer, but it would be better off if its owners turned it off.
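
To make that example concrete, here is a quick back-of-the-envelope sketch. The monthly prices and the output level are assumptions I invented for illustration (and startup and shutdown costs are ignored for simplicity); only the $26/MWh production cost comes from the example above.

```python
# Sketch: compare a coal unit that runs all year against one that shuts down in
# months when it cannot cover its production costs. Prices and output are
# illustrative assumptions; startup/shutdown costs are ignored for simplicity.

PRODUCTION_COST = 26.0   # $/MWh, from the illustrative example above
OUTPUT = 300_000         # MWh generated per month (assumed constant)

# Hypothetical average energy prices ($/MWh): low in spring/fall, higher in summer.
monthly_price = {
    "Jan": 28, "Feb": 27, "Mar": 22, "Apr": 20, "May": 21, "Jun": 30,
    "Jul": 35, "Aug": 34, "Sep": 23, "Oct": 21, "Nov": 24, "Dec": 29,
}

def net_revenue(prices, shut_down_when_unprofitable=False):
    total = 0.0
    for month, price in prices.items():
        margin = (price - PRODUCTION_COST) * OUTPUT
        if shut_down_when_unprofitable and margin < 0:
            continue  # turn the unit off that month instead of running at a loss
        total += margin
    return total

print(f"Run all year:      ${net_revenue(monthly_price):,.0f}")
print(f"Seasonal shutdown: ${net_revenue(monthly_price, True):,.0f}")
# With these made-up prices, the always-on unit barely breaks even over the year,
# while the seasonally operated unit comes out millions of dollars ahead.
```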

The competitive electricity markets in the United States weren’t designed to support this inefficient practice. Nevertheless, utility companies have found a way to keep uneconomic coal plants operating by exploiting a market rule known as “self-scheduling.”

A better way

A coal plant might be able to recover its costs in the summertime but ends up taking a loss in the spring and fall. While the power plant might look solvent at the end of the year, power plant owners (and utility customers) would be better off financially if the power plant shut down seasonally.

Some utilities have chosen to shut down coal plants seasonally and operate them only a few months of the year. SWEPCO and Cleco, two utilities that own and operate the Dolet Hills coal plant in Louisiana, found that they will save customers tens of millions of dollars by only operating the plant in the summer. They aren’t the only utilities to figure this out. To read more about seasonal operations, see my blog on the subject, here.

Consumer impacts

This practice is costing consumers an estimated $1 billion a year.

That research shows that not all owners of coal plants engage in this particular inefficient practice. Rather, it is disproportionately vertically integrated utilities: utilities that own generation and directly serve retail customers who have no choice of alternative suppliers. Those types of utilities can lose money in the competitive market and then recover those losses on the backs of captive retail customers, including the folks who are most economically vulnerable to higher energy costs.

Far-reaching implications

A billion-dollar, behind-the-scenes bailout is bad enough. But if you think about the far-reaching implications of this practice, it’s clear it undermines the foundations of competitive energy markets and results in wind, solar, and energy efficiency all being undervalued by utilities.

Wholesale market price

Where wholesale markets have been established, the market price for electricity is determined by a clearing price that all generators get paid (known as a locational marginal price, or LMP). The price is set by the most expensive unit that clears the energy auction at a given time. This has given rise to the general belief that the market price represents the most expensive unit on the system. However, it rarely works out so cleanly in real life.

Markets are supposed to make sure that power plants are operated in “merit order,” from the least expensive to operate to the most expensive. Self-scheduling allows expensive coal plants to cut in line, pushing out less expensive power plants.

Properly functioning markets are predicated on properly functioning price signals. If the market prices are distorted, then what happens to the market?

Self-scheduling allows expensive coal plants to cut in line, pushing out less expensive power plants. High-cost coal plants (in yellow) push out lower-cost resources and deprive all resources that do operate out of some amount of revenue.
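
For the technically inclined, here is a minimal merit-order dispatch sketch showing what happens when a high-cost unit self-schedules. The generators, offer prices, and demand level are all made up for illustration; real market clearing (and the treatment of self-scheduled units as price takers) is considerably more complicated than this.

```python
# Minimal merit-order sketch: serve demand with the cheapest offers first, and
# see how a self-scheduled, high-cost coal unit changes the outcome. All units,
# costs, and the demand level are illustrative assumptions.

def clear_market(offers, demand_mw, self_scheduled=()):
    """Return (dispatched units, clearing price).

    Self-scheduled units run regardless of cost and are treated here as price
    takers: they do not set the clearing price, they just shrink the demand
    left over for everyone else.
    """
    dispatched, remaining = [], demand_mw
    for name, cost, mw in offers:                       # must-run units go first
        if name in self_scheduled:
            take = min(mw, remaining)
            dispatched.append((name, take, cost))
            remaining -= take
    clearing_price = 0.0
    for name, cost, mw in sorted(offers, key=lambda o: o[1]):  # merit order for the rest
        if name in self_scheduled or remaining <= 0:
            continue
        take = min(mw, remaining)
        dispatched.append((name, take, cost))
        remaining -= take
        clearing_price = cost            # the marginal economic unit sets the price
    return dispatched, clearing_price

offers = [  # (unit, offer in $/MWh, capacity in MW) -- hypothetical
    ("wind", 5, 300), ("gas_1", 22, 400), ("gas_2", 28, 400), ("coal", 35, 500),
]

print(clear_market(offers, demand_mw=800))                           # coal not needed; price $28
print(clear_market(offers, demand_mw=800, self_scheduled={"coal"}))  # coal cuts the line; price $5
```

In the second run, the expensive coal unit displaces both gas plants and the clearing price collapses, which is exactly the revenue-suppression problem described next.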

Deprives competitive generators of revenue

Self-scheduling artificially drives the market price down. This deprives competitive generators of revenue, which reduces the incentive for more competition. Market prices have been low for several years now, putting a strain on competitive generators that operate coal, nuclear, and even gas plants. The economic woes of these generators could be linked in part to the self-serving practices of their monopoly brethren.

As consumers, we generally think of low wholesale prices as being good because those low prices should then flow to us. One reason why this practice is so nefarious is that the high costs of the coal plants are eventually passed along to consumers, depriving consumers of the benefits of low wholesale prices.

Stopping this practice might result in slightly higher wholesale prices of electricity in the short run but it would reduce the overall costs of running the system, and those overall savings should flow through to customers.

We also take a hit in the long run. The current costs to operate the system are supposed to be reflected in wholesale prices, but they aren’t. Consequently, a distorted signal is being sent to the developers who ought to be building lower-cost resources (including wind and solar) to enter the market.

Transmission planning

Artificially low market prices don’t just impact generation investment decisions; they also impact transmission development. Planners and developers look at geographic price differentials to identify constraints that would make building a new transmission line economically rational.

MISO, a regional transmission operator, has historically been limited in how much power it can move between its northern and southern regions, with a notable bottleneck at the north/south divide. This is a physical constraint.

Some utilities that serve the southern states, most notably Entergy, have pointed to market prices as justification for not expanding north-south transmission. Many of the MISO south states have been flooded with self-scheduled coal, pushing more affordable generators off the bid stack. Coal plants with production costs of $30/MWh or even $40/MWh have been operating year-round, but the market-clearing price is rarely that high. The distorted market price has created a perception that there might not be a need for new transmission. In reality, new transmission would allow MISO south states to gain better access to the low-cost wind available in the rest of MISO.

Undermines how we value renewables

In many parts of the country, renewable energy and/or energy efficiency is valued through an esoteric process known as an “avoided cost study.” Avoided cost studies look at the costs avoided by investing in clean energy; these avoided costs are the benefits of clean energy. These benefits include avoided energy, avoided capacity, avoided emissions, and many more.

Avoided costs are used to set the rates paid to small-scale renewable projects through a process set out in a law known as PURPA. You can read UCS’s primer on PURPA here.

As noted by UCS, PURPA has been an incredibly effective measure in promoting renewable energy and one of the largest drivers for renewables in the US, alongside renewable portfolio standards and renewable energy tax credits.

PURPA has done a lot to drive renewable energy.

In the case of PURPA, self-scheduling coal reduces market prices, which are often used to determine avoided energy costs. The avoided costs feed into a “PURPA rate,” the rate at which small-scale renewables are paid for producing electricity.

As a result of self-scheduling, many wind and solar facilities are being underpaid.
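
Here is a simple numerical sketch of that chain of effects. Every number is hypothetical, and real avoided-cost calculations involve far more than a single annual energy price, but it shows how a suppressed clearing price translates directly into a smaller PURPA payment.

```python
# Sketch: how a suppressed market clearing price flows through an avoided-energy-
# cost proxy into a PURPA payment. All prices and outputs are hypothetical;
# actual avoided-cost calculations also cover capacity, time-of-day shaping, etc.

coal_running_cost = 35.0           # $/MWh it actually costs a self-scheduled coal unit to run
suppressed_clearing_price = 22.0   # $/MWh with self-scheduled coal flooding the market
undistorted_clearing_price = 28.0  # $/MWh the market might otherwise clear at

solar_output_mwh = 50_000          # annual output of a hypothetical small solar facility

def purpa_energy_payment(avoided_energy_cost, mwh):
    # If the avoided energy cost is proxied by the market clearing price,
    # the project is paid that price for every MWh it produces.
    return avoided_energy_cost * mwh

shortfall = (purpa_energy_payment(undistorted_clearing_price, solar_output_mwh)
             - purpa_energy_payment(suppressed_clearing_price, solar_output_mwh))
print(f"Coal unit's actual running cost: ${coal_running_cost:.0f}/MWh")
print(f"Avoided-cost proxy used to pay the solar project: ${suppressed_clearing_price:.0f}/MWh")
print(f"Annual underpayment to the solar project: ${shortfall:,.0f}")
```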

Similarly, energy efficiency and net metering are often evaluated using energy market prices as a proxy for the “value” those resources provide. When that happens, utilities can press pause on energy efficiency programs or even nix net metering.

Whether it is for PURPA, rooftop solar, or efficiency, many monopoly utilities ask regulators to approve avoided energy costs that are based on the market clearing prices and then turn around and run power plants whose costs are far above those market clearing prices.

How’s that for working the system in your favor?

Would eliminating self-scheduling fix the problem?

No. Self-scheduling is simply a loophole that many utilities are currently using to fleece customers. Close that one, and they very well might find a new one. The trick is to address the problem directly and intentionally.

One of the most direct ways I can think of to address this problem is for state utility commissioners to disallow the above-market costs these utilities incur by running their fleets this way. But there are plenty of other ways.

A recent report that outlined ways to improve energy markets also noted the problems with self-scheduling. The report’s authors suggested that more transparent data and reporting on the practice might help serve as a disinfectant.

Maybe there are other good ideas out there. Maybe you have one? If so, share your thoughts with me on Twitter, where you can find me talking more about this and other esoteric energy issues.

The Future of Transportation Is Electric

Photo: Kārlis Dambrāns/Wikimedia Commons

It’s clearer every day: the future of transportation is electric. We should be cheering this transition—and encouraging it, because along with the benefits for drivers, electrifying transportation is going to be a critical piece of fighting climate change.

Unfortunately, for many observers, skepticism about electric vehicles (EVs) has become something like an article of faith. Mired in an obsolete set of facts, electric-vehicle naysayers are making the same arguments they’ve made for years even as technology speeds forward.

Take columnist George Will, who launched a broadside against electric vehicles last week. In casting doubt on the viability of EVs, Will is revealing that he hasn’t updated his understanding of the technology or the market in a decade. His argument relies upon outdated, misleading and just-plain-wrong evidence, undermining his thesis completely.

Here’s the truth. Electric vehicles are considerably cleaner than gasoline-powered cars, and this advantage is only increasing with time. Coal-fired power generation is declining, and the share of our electricity produced by renewables is growing. Indeed, Will inadvertently makes this point in his article. He points out that 27 percent of our electricity comes from coal power plants but leaves out entirely the fact that a decade earlier, coal was the largest source of electricity at almost half (48 percent) of all generation. We’re on the right path.

Coal-fired electricity generation has fallen significantly over the last decade, as natural gas and renewable electricity generation have increased. Replacing coal power plants with renewable sources of electricity will make electric vehicles even cleaner. Nuclear and hydroelectric power generation are not shown, as they have remained largely unchanged at 20 and 7 percent of generation, respectively. Source: U.S. Energy Information Administration

This move to cleaner electricity means switching from gasoline to electricity to power our cars and trucks will lower global warming emissions. Our most recent analysis (based on 2016 electricity generation statistics) shows that the average EV driven in the US produces global warming emissions equal to an 80 mpg gasoline car. And that number is even better in parts of the US, like California and the Northeastern states, where coal is lower and renewables higher in the mix.
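
If you are curious how an “mpg-equivalent” number like that is derived, here is a simplified version of the arithmetic. The EV efficiency and grid carbon intensities below are illustrative assumptions, and the sketch leaves out the upstream fuel-cycle emissions that UCS’s full analysis includes, so it won’t reproduce the 80 mpg figure exactly; it just shows why a cleaner grid pushes the equivalent mpg up.

```python
# Sketch: convert an EV's electricity use and the grid's carbon intensity into a
# gasoline "mpg equivalent." Inputs are illustrative assumptions; UCS's published
# analysis uses regional grid data and includes upstream (fuel-cycle) emissions.

CO2_PER_GALLON_GASOLINE = 8.9  # kg CO2 from burning one gallon of gasoline (tailpipe only)

def mpg_equivalent(kwh_per_mile, kg_co2_per_kwh):
    """Gasoline mpg that would produce the same CO2 per mile as the EV."""
    kg_co2_per_mile = kwh_per_mile * kg_co2_per_kwh
    return CO2_PER_GALLON_GASOLINE / kg_co2_per_mile

# A hypothetical ~0.30 kWh/mile EV on grids of different carbon intensity (kg CO2/kWh).
for label, grid_intensity in [("coal-heavy grid", 0.90), ("mixed grid", 0.45), ("low-carbon grid", 0.20)]:
    print(f"{label}: ~{mpg_equivalent(0.30, grid_intensity):.0f} mpg equivalent")
```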

In addition to being the biggest source of global warming emissions in the US, transportation is also a major source of air pollution that harms public health. Reducing the amount of pollution from tailpipes will have real benefits for people living in densely-populated cities or along major highways.

Electric vehicles are cheaper to operate and maintain than traditional gasoline vehicles. While the price of oil is volatile, the cost of electricity is low and stable, and in most cities driving electric can save a household hundreds of dollars every year. And as the market grows, the price of electric vehicles has fallen dramatically, with 80 percent of electric vehicles sold in 2018 having a base suggested retail price under $50,000.

An electric future is not just the hope of EV owners and engineers. Increasingly, both automakers and governments around the world are looking to an electric future as a way to cut oil use, reduce the risks of climate change and build a cleaner, more sustainable future. Major automakers including Volkswagen, General Motors, and Toyota have all explicitly stated their belief that the future is electric.

We’re headed toward an electric future—but it’s in our interest to make sure it happens fast, because the urgency of the climate crisis demands it, and because we can’t afford to get left behind as the world makes the shift to innovative new technologies. That’s why it still makes sense for the federal government to encourage electric vehicle adoption. We need to build a strong electric market and keep the US competitive in a carbon-conscious world.  Companies will prioritize research, development, and manufacturing where policies encourage electric vehicles. Pulling back on electric car incentives too early could harm both US manufacturing and drivers as these global car companies prioritize progress outside the US.

In insisting these incentives are unnecessary, Will uses outdated and misleading data. For example, he writes “after a decade of production, moral exhortations and subsidies, electric cars are a fraction of 1 percent of all vehicle sales.” The truth is far different: in 2018 (7 years, not 10, after the debut of the Nissan LEAF and Chevy Volt), electric vehicles were 2 percent of new car sales in the US and 8 percent in market leader California. Sales are picking up and driving economies of scale in the EV industry, but the next few years are critical for EVs to reach purchase price parity with conventional vehicles.

Mr. Will also picks up a favorite argument of the current administration: US cars and trucks are only a slice of global emissions, so why bother moving to a cleaner technology? Yes, our cars and trucks are not the only source of emissions, but they are a growing source in the US. And to avoid the worst impacts of climate change we need to greatly reduce emissions from all sectors, transportation included. The US is already seeing greatly increased costs from disasters linked to climate change, and the federal government warns that future costs of global warming to the US alone could reach hundreds of billions of dollars per year from extreme heat deaths, labor productivity losses, and coastal flooding damage. Implementing policies now to reduce emissions from transportation, like extending the federal EV tax credit, makes sense.

Finally, George Will also misleads with his discussion of the average income of those who benefit from the federal electric vehicle tax credit. The analysis he points to only looks at early electric car purchases (2014 and earlier) and crucially ignores leases of electric cars. Because many lower-priced electric cars were leased (and some were available only for lease), the income data presented is skewed toward higher-income purchasers.

Will’s refusal to look at the latest evidence undermines his case against electric vehicle incentives. The real world has moved past his outdated arguments—and to refuse to update his understanding means he’s being dishonest with his readers.

From Profound Conflicts of Interest to a Blind Eye for Harassment, Barry Myers Is the Wrong Choice for NOAA

Photo: C-SPAN

When the CEO of AccuWeather was nominated by President Trump to lead the National Oceanic and Atmospheric Administration (NOAA), it immediately raised serious concerns from me and many others. After all, Myers is not a scientist but will be leading a major science agency. His business, AccuWeather, is essentially built around the re-processing, re-packaging, and marketing of the weather data developed and routinely produced at public expense by NOAA. Therefore, much of the work of NOAA in scientific research, data collection, and forecasting of weather, climate, and severe storms directly impacts AccuWeather, which relies on those agency efforts for its business. That is the very definition of a conflict of interest for the putative NOAA administrator.

I worked at NOAA for 10 years as a scientist and a senior manager. It is a great agency that does outstanding science and provides vital services for the nation—weather forecasts, severe storm warnings, tsunami warnings, oceanography, climate science, charting, coastal management and marine resource management including fisheries, marine mammals, and endangered species and habitat. NOAA’s work on behalf of the public is deeply science-based.  Many companies utilize NOAA data (not just weather data), which makes the agency’s work critical to the nation’s economy as well as public health and safety.

Myers was first nominated way back in October 2017, and it has apparently taken nearly two years for him to come up with a way to try to address his conflicts of interest. His solution? Sell his shares in the family company to other family members at a reduced price, with a provision to buy them back when he leaves government. In other words, Myers and the White House believe that he can manage a large federal agency that is the basis of his family business by pretending that he doesn’t care about what his brothers, wife, and other relatives earn. And then at the end of his tenure at NOAA he can buy back the shares, again at a reduced price, and share in the earnings. Somehow in his view that’s not a conflict of interest? Right.

I for one am NOT comfortable that Myers will make decisions as the head of NOAA that will solely benefit the public and not his business and his family.  How about you?

As if that weren’t enough to halt all further consideration of Mr. Myers to lead NOAA, it gets worse. Recent reporting has revealed that even before the nomination went to the Senate back in 2017, the Department of Labor opened an investigation into allegations of “widespread sexual harassment” at AccuWeather, and that Myers’s company was aware of the harassment while he was CEO and took no action for a long period of time. The report of that investigation was available to the administration more than a year ago, even though it was only revealed to the public by the press this month. Still, the White House re-nominated Myers in January of this year.

So, let’s review. Barry Myers is not a scientist. He would have deep and unresolved conflicts of interest if he were to lead NOAA. He is touted as an experienced manager as his primary qualification, but as a manager he led a company that fostered a culture of sexual harassment and workplace hostility.

Myers’ nomination is now approaching a vote on the Senate floor to confirm him for the position. NOAA and the nation don’t deserve an unqualified and conflicted nominee that turns a blind eye to sexual harassment. Under no circumstances should he be confirmed. Tell your senators: Myers? NO for NOAA!

 

How Can We Get More Electric Trucks on the Road?

Tesla semi truck.

California is considering a policy to drive sales of electric trucks like it has done for sales of electric cars.

Electric cars in California

You may know that California has the largest share of electric cars in the United States. But the size of that share may surprise you.

While California has just 11 percent of the country’s vehicles and 12 percent of the country’s population, it accounts for roughly 50 percent of the 1 million electric cars sold in the United States (a figure that includes plug-in hybrids).

In 2018, full electric (95,000) and plug-in hybrid (63,000) electric cars represented 8 percent of all passenger vehicle sales in California (car, SUV, light pickup truck). These impressive numbers were driven largely by sales of the Tesla Model 3, which had its first full year of sales, totaling over 50,000 in the state. Sales of the Tesla Model S, Tesla Model X, and Chevy Bolt totaled 10,000 each.

What makes California a leader in electric cars? A main reason is a policy requiring car manufacturers to sell electric vehicles in the state.

California is considering a similar policy for trucks

Trucks1 and buses make up just 7 percent of vehicles on the road in California, but 20 percent of global warming emissions and 40 percent of smog-forming nitrogen oxide (NOx) emissions from the transportation sector, the largest sector for both types of emissions in California.

The California Air Resources Board (CARB) recently released the latest iteration of a policy concept that would do for trucks what it has done for cars: set zero-emission sales targets. If set at the right level, such targets could transform the truck sector from one fueled by diesel to one powered by electricity and hydrogen.

The standard has undergone two and a half years of public workshops and information gathering. It will undergo another year of public input before it is voted on.

Here’s where things stand

The sales standard proposed by CARB would result in approximately 5 percent of trucks (84,000) operating in California as zero-emission vehicles by 2030.

Compared to the limited number of electric trucks on the road in California today (less than a thousand), 84,000 zero-emission trucks might sound like a lot. But viewed against the entire 1.5 million trucks operating in the state, 95 percent would still be powered by a combustion engine in 2030.

The table below summarizes the sales standard proposed by CARB. It sets different standards for different categories of trucks: Class 2b-3 trucks, Class 4-8 vocational trucks, and Class 7-8 tractor (semi) trucks.

Table showing proposed sales standards (percentages) and estimated sales of zero-emission trucks in California.

Table showing total truck sales estimated from the proposed sales standards for zero-emission trucks in California.

Numbers are based on today’s truck sales (100,000 trucks per year2) and today’s truck population (1.5 million trucks across all categories) in California. These numbers also assume no trading of truck credits (with different values) across the Class 2b-3, Class 4-8 vocational, and Class 7-8 tractor categories, which CARB has proposed allowing.

How has the policy changed over the last two years?

CARB’s original proposal, released two years ago, started at a 2.5 percent sales standard in 2023 and increased to 15 percent in 2030. The most recent proposal starts a year later and works out to be 3 percent of total sales in 2024, increasing to 25 percent of total sales in 2030.
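
As a rough illustration of how a ramping sales standard turns into trucks on the road, here is a sketch that ramps a combined requirement linearly from 3 percent in 2024 to 25 percent in 2030 and applies it to roughly 100,000 annual truck sales. The linear ramp and the single statewide sales figure are my simplifications, not CARB’s actual category-by-category schedule, so the total lands in the same ballpark as, but not exactly at, the 84,000-truck estimate.

```python
# Sketch: cumulative zero-emission trucks implied by a sales standard that ramps
# from 3% of sales in 2024 to 25% in 2030. The linear ramp and the single
# combined sales figure (~100,000 trucks/year) are simplifying assumptions;
# CARB's proposal sets separate percentages for each truck category.

ANNUAL_SALES = 100_000            # approximate California truck sales per year
START_YEAR, END_YEAR = 2024, 2030
START_PCT, END_PCT = 0.03, 0.25

cumulative = 0.0
for year in range(START_YEAR, END_YEAR + 1):
    share = START_PCT + (END_PCT - START_PCT) * (year - START_YEAR) / (END_YEAR - START_YEAR)
    cumulative += ANNUAL_SALES * share
    print(f"{year}: standard {share:.0%}, cumulative zero-emission trucks ~ {cumulative:,.0f}")

print(f"Share of today's ~1.5 million trucks: {cumulative / 1_500_000:.1%}")
```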

The original draft included Class 2b pickup trucks but excluded Class 8 trucks. The most recent draft flips that: it includes Class 8 trucks but excludes pickups until 2027. Plug-in hybrid trucks (e.g., trucks that pair a combustion engine with a battery providing roughly 20 miles of electric range) would be counted as one-third of a full electric truck.

Using the most recent sales numbers, the original proposal would have resulted in 72,000 zero-emission trucks by 2030, compared to 84,000 trucks with the new proposal. This increase, small in the context of the 1.5 million trucks in California, does not match the advances in truck technology and purchases that we’ve seen in the last two years, or the $579 million approved for investments in electric truck and bus charging infrastructure.

Just last week, electric utilities in California, Oregon, and Washington  announced they will study how to provide charging infrastructure for trucks along I-5.  We’ve come a long way since electric cars first hit the market in late 2010; even interstate electric truck travel is now considered within reach.

A more ambitious standard is needed

Improving local air quality and reducing California’s contribution to global warming will require more than 5 percent of trucks to be zero-emission by 2030. So, the overall sales targets need to be higher.

For a sense of scale, 225,000 zero-emission trucks would be just 15 percent of trucks on the road today. Analysis by CARB indicates that 100,000 cleaner trucks are needed in the Los Angeles area alone to meet 2023 air quality standards. And we can’t get to net-zero carbon emissions by 2045, a goal set by Governor Brown last year, without significant deployment of electric trucks.

In the Class 2b-3 category, there is room for strengthening the standard (currently tops out at 15 percent of sales in 2030), especially if pickup trucks have a delayed timeline as drafted. Some of the vehicles most suited for electrification today, such as small delivery vans, small box trucks, and shuttle buses, are in the Class 2b-3 category.

The sales standard should also start in 2024 for Class 7 and 8 tractor trucks, rather than being delayed until 2027. Electrification of these trucks is particularly important as they travel greater distances and have lower fuel efficiencies than other types of trucks. Several battery and fuel cell electric tractor trucks are planned, if not in demonstration already.

The benefits of moving faster on truck electrification include reductions in global warming emissions and improvements in air quality. And recent UCS analysis shows that reducing emissions from vehicles is critical for addressing the inequitable exposure to air pollution from cars and trucks experienced by low-income communities and communities of color in California.

Detailed analysis by CARB also indicates significant financial benefits are possible with truck electrification. In all three of the truck applications examined by CARB, owning and operating a battery electric truck was estimated to be comparable in cost to, if not cheaper than, a diesel truck in 2024, when the proposed standard would take effect. In some applications, battery electric trucks are estimated to be cheaper today, without including the significant purchase incentives currently offered by the state.

From left to right: UPS electric delivery truck, BYD Class 6 electric box truck, Toyota Class 8 hydrogen fuel cell semi truck.

The sales standards will be coupled with purchase standards

CARB has indicated an intent to develop purchase standards for fleets that would complement the sales standards for manufacturers. The purchase standards would also take effect in 2024.

The details of these standards – likely different for various end-uses of trucks – have yet to be determined, but would set targets for fleets to begin incorporating electric truck models into their operations. To help inform their development of truck purchase standards, CARB plans to collect data (through regulatory action) from fleets operating in the state.

Purchase standards aren’t without precedent. Last December, California set a landmark purchase standard that will ensure every transit bus sold in the state will be a zero-emission vehicle by 2029. This was the first policy in the United States shifting an entire class of vehicles to 100 percent electrification.

In all, sales and purchase standards are the next step in getting clean trucks on the road. Such standards will build on successful purchase incentive programs already in place as well as charging infrastructure investments approved and underway by California electric utilities. This suite of policies mirrors strategies that have made California a leader in electric cars.

What’s next

CARB staff will continue hosting workshops on the proposed sales standard and fleet reporting requirements over the next several months. The CARB Board will have its first formal, but non-voting, hearing of the sales standard and reporting requirements in December. A final version of both will be voted on sometime in 2020.

As the process for developing the sales standard progresses, UCS will be evaluating technology availability and advocating for standards that put the electric truck market on a trajectory that is feasible, ambitious, and necessary to address the public health, climate, and equity problems resulting from truck exhaust.

 

1 “Trucks” refers to vehicles with a gross vehicle weight rating of at least 8,501 lbs, i.e., a large pickup truck and up. Trucks falling into the lightest category include the Chevy Silverado 2500 pickup truck, the Ford F-250 pickup truck, cargo vans, and small U-Haul trucks. CARB refers to the light end of trucks as “light-heavy-duty vehicle 1” (LHDV1).

2 CARB’s most recent sales numbers indicate 74,149 Class 2b-3 trucks (of which 44,354 are pickup trucks), 27,182 Class 4-8 vocational trucks, and 4,837 Class 7-8 tractor (semi) trucks.

Public Domain Photos: Jimmy O'Dea

It’s Earth Day and these 3 Unique (but Endangered) Species are Giving Me LIFE!

Photo: NASA

It’s Earth Day, and this year’s focus is to protect our species. That focus makes me incredibly happy for three reasons: 1) I get to return to my roots as an ecologist and tell you about some super cool species, 2) there are lots of endangered species that don’t receive a ton of attention BUT need attention, and 3) this post is not about the Trump administration doing terrible stuff to science (although they haven’t exactly been great to endangered species; you can read about that here, here, and here).

Species #1 – The Ohlone Tiger Beetle

The Ohlone tiger beetle, probably waiting to chase some prey.

Some people don’t like insects, but they tap into my awesome nerdy side. As an undergraduate student, I took an entomology class and most of our labs were spent outside catching insects, which was so much fun! But about this cool beetle…

The Ohlone tiger beetle only emerges on land for about 2 months, and it spends that little time mostly hunting. It lurks in the shadows of trails that have been created by cattle and hikers until an unsuspecting passerby comes along and then, BOOM! Dinnertime. The beetle also has been observed chasing its prey in flight. And the larvae of these beetles are no different – the grubs will literally flip backwards to catch prey. Maybe it’s the little kid that is still inside of me, but I really want to see this beetle in action. I imagine if I did, I’d be all like “Wow, bro. That’s sooo cool.” Also, can we just take a minute to appreciate how gorgeous this beetle is?

Unfortunately, the beetle’s population is critically endangered due to loss of habitat to urban development and the impacts of toxic insecticides that come from urban runoff. The species is endemic to California.

Species #2 – The Mississippi Gopher Frog

The Dusky Gopher Frog, once known as the Mississippi Gopher Frog, has an average length of about three inches and a stocky body; its back ranges in color from black to brown or gray and is covered with dark spots and warts.

Who doesn’t love a little frog that’s covered in spots? Or one whose mating call reminds you of the snoring of your significant other (how endearing)? Quite the opposite of the tiger beetle, this critter is not ferocious – this gopher frog places its hands over its little eyes when threatened. I can vouch that this mode of defense is effective, especially when watching horror films.

While this frog used to hop around the Gulf Coastal Plain in Louisiana, Mississippi, and Alabama, a small population of about 200 frogs is all that is left in Mississippi. The species owes its most recent population bump to conservation efforts by US Fish and Wildlife Service (FWS) scientists. These scientists would like to expand their conservation efforts to Louisiana where the frog once lived, but setting aside critical habitat for the species in that state has proved difficult. The decision on whether or not FWS will be able to expand conservation efforts to Louisiana is currently tied up in the courts.

Species #3 – The Southern Bluefin Tuna

If there were a Guinness Book of World Records for fish, the southern bluefin tuna would be highlighted a lot. The bluefin are the largest tuna species and can live up to 40 years. They also can swim to depths of 2,500 meters (that’s about the length of 27.5 football fields). In fact, they can swim to a depth of 1,000 meters (the length of 11 football fields) in about 3 minutes. “But, Jacob, that’s crazy – the change in temperature from the surface of the water to a depth of 1,000 meters would be deadly!” You’d be correct for many species, but bluefin tuna are capable of elevating their body temperature up to 20°C above that of the surrounding water. Researchers also have found that adrenaline produced during a bluefin tuna’s quick and deep dive helps regulate the beating of its heart. Human hearts could not withstand such a temperature drop – our hearts would fail.

This tuna species is listed in the Guinness Book of World Records once – for being the most expensive single fish sold at a fish market, at $3.1 million. Bluefin tuna are prized for their taste and used as sushi and sashimi. The species has been overfished to the point that 85% of its spawning population was lost from 1973 to 2009. The population is still decreasing.

Protect our species

I must admit that I’ve never seen any of these species in the wild, but I’d like to someday. Can you imagine seeing a little green beetle so ferocious that it tries to attack your giant foot along a trail, mistaking a frog’s mating call for the snoring of your tent mate, or seeing a school of bluefin tuna dive thousands of meters below the water surface in a matter of minutes?

While all these species are unique in some way, they all have another commonality: they are critically endangered because of humans. And once a species is gone, we cannot bring it back – we cannot bring back the benefits they bring to our ecosystems, the resources they provide to us, or the joyful experiences they may bring to our lives. Thankfully, scientists and conservationists are working around the clock to help these species populations bounce back. Take a minute on this Earth Day to learn about what you can do to protect our species, and maybe learn a fact or two about an endangered species in your very own backyard.

In the meantime, I’ll be listening to more audio clips of gopher frogs.

Photo: USFWS (Western Carolina University photo/ John A. Tupy)

Legislation to Modernize the California Public Records Act Improves, Advances

Photo: Asilvero/CC BY-SA 3.0 (Wikimedia)

UCS-supported legislation to modernize the California Public Records Act (CPRA) advanced through the California Assembly Judiciary Committee earlier this month, and will soon be heard by Assembly Appropriations. Assembly Bill 700 is intended to preserve the ability of researchers at public universities to pursue highly policy-relevant research without being harassed and attacked by companies and activists who are threatened by their work. The legislation has sparked spirited, productive, highly interesting conversations about how to protect researchers while also allowing for full accountability for public institutions and their staff.

As the legislative process plays out, there has been some confusion about the bill and misrepresentation of its intended scope. To provide clarity about the bill’s intent we created this Frequently Asked Questions document. In the paragraphs below, I reproduce some of the main questions in the document after putting them in the context of our current efforts.

Attacks on research have demonstrable harm

Commercial and industry interests and individuals across the political spectrum are increasingly using broad public records requests to disrupt research, attack and harass scientists, and chill professional discussion and debate. This can squelch, and has squelched, inquiry, discovery, and innovative research that would benefit the public. In California alone, tax preparation companies, the gun lobby, and the chemical industry have all used public records requests to undermine policy-relevant research. And attacks don’t need to be huge in number: an attack on one researcher can discourage inquiry in an entire field. I and others have written about this extensively, and you can find numerous examples in UCS’s Freedom to Bully report and the “Open Records, Shuttered Labs” UCLA Law Review article by Claudia Polsky.

Tobacco companies, for example, have used open records requests to gain access to records of scientists studying the impact of cigarette marketing on children and adolescents. There are harms to individual researchers, who suffer harassment, high legal and processing costs, and diversion away from their primary work. Some researchers have even left academic research positions altogether or moved to private universities. Yet the broader, more significant harm is to the public who no longer benefits from the results of important research when researchers abandon the field or stop investigating.

Another example: the California Rifle and Pistol Association Foundation sought all correspondence related to an environmental toxicologist’s work. The toxicologist found that lead in ammunition was poisoning endangered California condors, evidence that led the state to restrict lead ammunition. The requests took a toll on the researcher’s ability to pursue funding, put his unpublished data at risk of being disclosed and scooped by others, undermined his collaborations with colleagues, and discouraged graduate students from working with him.

Scientists who volunteered their time and helped plug the hole in the ocean during the Deepwater Horizon oil disaster expressed significant concerns about the public disclosure of deliberative scientific materials, whether it be through open records requests or subpoenas.  “Our concern is not simply invasion of privacy, but the erosion of the scientific deliberative process. Deliberation is an integral part of the scientific method that has existed for more than 2,000 years; e-mail is the 21st century medium by which these deliberations now often occur,” wrote the scientists.

“There remains inadequate legislation and legal precedent to shield researchers and institutions…from having to surrender pre-publication materials,” added their institution, the Woods Hole Oceanographic Institution.

An opportunity to modernize the CPRA

So how did this legislation come together? Last year, University of California Berkeley law professor and public interest advocate Claudia Polsky published a law review article on the increasing weaponization of open records laws. The New York Times wrote about the issue, profiling a UC-Davis tax policy expert who was attacked by the tax preparation industry after he spoke out against efforts to prevent the U.S. government from providing free tax filing services.

Soon thereafter, UCS began talking with Assemblymember Laura Friedman’s office about what legislation to address this problem might look like, and agreed we would work together, with input from numerous stakeholders, to craft a legislative solution. In short, our goal is to protect researchers’ scientific work while preserving public access to documents that could demonstrate sexual harassment, research misconduct, funder influence, misuse of funds, illegal activity, or any other conduct or business that is not explicitly part of the deliberative aspects of the research process. Where and how to draw that line is still up for discussion.

At the Judiciary Committee hearing, some committee members cautioned that without a careful approach, attempts to exempt academic materials could go too far and prevent access to information that rightfully should be in the public domain. For example, there was agreement that we would not want legislation to inadvertently make it more difficult to find out when human or animal study participants are mistreated. Thankfully, a majority of the committee recognized the significant risk to public university research from abusive CPRA requests and expressed confidence that a narrowly crafted bill could meet our twin goals of protecting transparency and quality research.

Seeking input to improve the bill

UCS is a strong proponent of transparency and accountability and has actively encouraged and reached out to other California and national pro-transparency organizations to weigh in on the legislation as it develops. We have met with the California Newspaper Publishers Association, the American Civil Liberties Union, the Electronic Frontier Foundation, the Reporters Committee for Freedom of the Press, and several other organizations to discuss how to narrowly tailor an exemption that protects the research process while allowing for discovery of misconduct or any improper influence on that process.

Every one of those meetings has helped us understand the ways the CPRA has been used to uncover misbehavior at public universities, and to craft language that would allow this kind of accountability to continue. While the organizations named above are not yet supportive of the legislation, I am hopeful that with the right amendments, they will be.

To me, it is encouraging to see thoughtful, reasoned dialogue about how we solve this problem. This is a conversation that has long been needed, and I’m gratified to see it happening in California. The more voices, the better. To that end, UCS welcomes input from a diversity of perspectives, including those who support the legislation in its current form and those who think it needs to be improved.

What should continue to be public

UCS believes it is critical to maintain public access to the vast majority of documents. The CPRA must continue to enable oversight of California’s public universities, including their funding, administration, and independence. Public access is crucial to understand whether university investigations into research misconduct and discrimination and harassment complaints are adequate. It’s also important to understand the potentially biasing influence of funders, not only on the research process but also on how academics use their research to communicate with the public or influence policy.

Toward this end, AB 700 makes explicitly clear that information regarding research funding agreements, communications among funders and researchers and other university staff, records related to governance or institutional audits of compliance, records related to disciplinary action taken against researchers, records that could demonstrate harassment or other improper behavior, or records not explicitly related to the research process will not be exempt.

What we do want to exempt from disclosure

AB 700 would protect a limited and narrow set of documents where more privacy encourages cutting edge research with little cost to public understanding of that research. This includes unpublished data, unfunded grant applications, and information that could compromise the privacy of research study participants. It also includes communications among researchers during the research process that would preserve their ability to have frank conversations with their peers about the research itself. Access to a researcher’s email correspondence with academic peers to test out and refine ideas provides little to allow the public to better understand the research itself.

Instead, it can and does chill conversation and discourage scientists from fully criticizing the work of others and from pursuing research questions they know will open them up to attacks from powerful interests. Public universities and professors are rightfully subject to open records, but that right should not be absolute. We all deserve the freedom to test new ideas, even and especially when they are contentious. Regardless of your line of work, can you imagine if every email you wrote, every comment you made, or every honest criticism of a colleague’s work was placed in the public domain?

It may be challenging to craft bill language that advances accountability while protecting researchers’ ability to pursue public interest research, but I’m confident that we can meet these twin goals.

Air Pollution Should be Monitored Using the Best Available Science: Meh, Says the EPA

Air pollution causes serious harm to our society – from coughing, to smog in the air, to a visit to the emergency room. And the only way to mitigate the threat of air pollution is to use the best available science and technology to measure it accurately. The Environmental Protection Agency (EPA) appears to disagree. The agency has quietly finalized a rule that ignores its mission to protect human health and the environment by instead focusing on saving industry money.

The change modifies a 21-year-old rule, called the Nitrogen Oxides State Implementation Plan Call, which was designed to curb emissions of nitrogen oxides (NOx) from industrial facilities in 20 states, mostly on the US East Coast, and the District of Columbia. Power plants, as well as large steel, aluminum, and paper manufacturers, in these states will now have the option to pick “alternate forms of monitoring” for NOx – none of which are specified – instead of the current standard of “continuous emission monitoring systems,” or CEMS. The EPA has justified modifying this rule as a potential cost-saving measure for industry.

CEMS are considered the “gold standard” for source monitoring and are just what they sound like: technology attached to industrial exhaust stacks that continuously measures air pollution levels. CEMS have proven to be incredibly effective at monitoring NOx pollution for the EPA’s Acid Rain Program (producing highly accurate data 95% of the time). As a result of this scientific evidence, CEMS were codified into regulation as a required monitoring tool for all large electrical and steam-producing industrial sources in 20 states and DC.

The new rule is designed to collect less high-quality data

In the rule’s text, the EPA admits that ditching the requirement for CEMS might allow these facilities “to perform less extensive data reporting or less comprehensive quality-assurance testing,” and that “monitoring approaches may be expected to provide less detailed monitoring data and require less rigorous quality assurance” (emphasis added). The rule could end continuous monitoring of a dangerous pollutant at hundreds of the nation’s largest industrial facilities, leading to data gaps that could significantly challenge the ability to curb NOx emissions.

This is extremely problematic. Measuring air pollution gives us a warning when things go bad, like a canary in a coal mine. If we don’t measure air pollution using the best science available, how can we have enough high-quality information to protect the health and safety of communities living near these facilities, communities that are already at risk of breathing in high levels of toxic air?

Nitrogen oxide emissions need to be curbed, and the previous system was working really well

NOx is a family of poisonous gases that can cause you to cough and wheeze, sometimes badly enough to require a visit to the emergency room (especially if you have asthma). This pollutant has a bad habit of combining with other substances to form smog and acid rain. New research has even suggested that NOx is a likely cause of asthma and a risk factor for the development of lung cancer, low birth weight in newborns, and an early death. And the risks are heightened for asthmatics, children, and the elderly.

Fossil-fuel electric utilities are one of the primary sources of NOx pollution and, thanks in part to state and federal regulation, NOx pollution levels from these sources have dropped by 82% over a 20-year period, according to EPA data (1997 to 2017).

Where did this rule come from?

Impetus for the 2019 rule likely originated from the Association of Air Pollution Control Agencies, a shadow group of state air regulators that don’t recognize the federal government’s authority – upheld by the Supreme Court – to regulate greenhouse gases. In a 2017 filing, the group described the continuous monitoring requirement as “overly burdensome” and costly to businesses outside the power sector.

Clint Woods, who was the association’s executive director at the time, is now deputy chief of the EPA’s Office of Air and Radiation. Woods has been previously implicated in suppressing a study detailing the cancer risks from formaldehyde, a clear attack on science and public health. Woods has also influenced the selection of the EPA’s Clean Air Scientific Advisory Committee (CASAC), which is now staffed with individuals who lack the expertise to provide adequate scientific advice.

Part of a pattern of sidelining science in Trump’s EPA

This is not the first time that the EPA, under the Trump administration, has sidelined air pollution science. Industrial facilities now have the opportunity to use less stringent control mechanisms for hazardous air pollutants, like mercury and benzene. A scientific advisory board of experts that once provided valuable information on particulate matter air pollution has now been dissolved. And establishing an air pollution standard may soon require economic considerations instead of being solely focused on improving public health – again, an abdication of EPA’s mission to protect human health.

One of the most important benefits that science can bring to the federal arena is to ground policy in an evidence-based approach. Air pollution policy fundamentally depends on obtaining high-quality, accurate data in order to make any significant health improvements in our communities. By disrupting the collection of high-quality data on a dangerous pollutant, the EPA disregards best scientific practices, decreases its own ability to properly monitor NOx pollution from industrial sources, and undermines its mission to protect public health. If we don’t have an accurate measure of how much NOx pollution is escaping from these facilities, how on earth are we supposed to stop it from causing real harm to our people and our environment?


Climate Justice Requires Prioritization of The Poor and Vulnerable

UCS Blog - The Equation (text only) -

Photo: Barry M. Goldwater Historic Photograph Collection

The legend of the mythological Phoenix tells the story of a “female sacred firebird with beautiful gold and red plumage”. It was said that at the end of her century-plus life cycle, the Phoenix ignited herself in a nest of twigs and, reduced to ashes, gave rise to a new young Phoenix from the smolder. It’s a fitting metaphor for Phoenix, Arizona, a relatively young city at 150 years, yet located in the Salt River Valley, a Sonoran Desert region that was inhabited and abandoned by people for thousands of years before taking its current form as a sprawling metropolitan area.

The history of human occupation of the Salt River Valley is, indeed, a story of birth and rebirth, starting with the Hohokam people, who by the 13th century had engineered an extensive network of irrigation canals for subsistence farming. The Hohokam were remarkably well-adapted to their arid environment. In the absence of sufficient rainfall, they were the only North American culture that used irrigation canals to water their crops. The Hohokam disappeared from the Valley for reasons that are not completely understood; we do know that between AD 1350 and 1450 their population declined drastically and vanished from the archaeological record. More recently, the Akimel O’odhom (or O’otham) native people have continuously farmed this area for hundreds of years – and live on as a federally recognized tribe.

Fast-forward to the late 1800s: Phoenix is “reborn” from the ashes of the abandoned Hohokam canals in the Salt River Valley, when miners, farmworkers, ranchers, and soldiers rebuilt the irrigation canals to create profitable, export-oriented agriculture. Even at this point in the modern history of the U.S. Southwest, we begin to see the emergence of climate-vulnerable communities – an imprint that’s still visible today.

Physiological and structural dimensions of heat vulnerability

But wait – isn’t the late 1800s a little bit too early to talk about frontline and other climate-vulnerable communities? Not according to our new research, which highlights how choices made pre-1900 have reverberated into our current climate crisis. In our contribution, called “Pathways to Climate Justice in a Desert Metropolis”, we argue that in Phoenix there are two distinct but intertwined dimensions of heat-related vulnerability: one is physiological; the other structural. Physiological vulnerability to heat depends on pre-existing illness, old age, or exposure to outdoor work, conditions that often compromise the human body’s capacity to keep its internal temperature near 37.0°C (98.6°F) and avoid death or illness. The public health evidence is clear on this: statistics for Maricopa County in 2016 show that males, people over 50 years of age, people experiencing homelessness, people of color, the poor, the socially isolated, those without AC at home, and those with pre-existing cardiovascular or respiratory disease were significantly overrepresented in heat-related deaths and non-fatal illnesses. The other dimension is structural and can be traced back to the late 1800s, when the city was founded. Historical racial segregation of people who were mostly poor and disproportionately of color has produced a stratum of the population that is residentially vulnerable to climate change.

 

 

Slums along the Salt River irrigation canals, where Mexicans and other non-Whites were segregated, contrast with elegant uptown housing in early 20th-century Phoenix

 The segregation origins of climate injustices in Phoenix: “Mexican Tenements”

In our research we found, for example, old fire insurance maps showing that in the early 1900s, Mexicans and other non-Whites were segregated to South Phoenix, the “wrong side of the [railroad] tracks”, where housing was improvised and substandard. In a time before formal land use regulations, fire insurance maps provide evidence of de facto segregation, as the more unsanitary quarters of the city were designated for Mexicans and other non-Whites to live in. In old photographs from Gov. Barry Goldwater’s collected papers, we found a great contrast between the slums along the irrigation canals in South Phoenix, where farmhands and their families lived in squalor, and the elegantly manicured landscapes and houses of the well-heeled in areas north of downtown Phoenix. This is evidence of what geographers focused on social disparities call “uneven economic development”, which refers to inequitable concentrations of wealth in some areas, and squalor and clusters of industrial and other unwanted land uses in others. Environmental contamination from facilities that store or process toxic chemicals, and a lack of shading vegetation, were and continue to be prominent features of the impoverished areas of South Phoenix, which today are among the hottest and most polluted areas of the Phoenix metro region.

Race-based segregation has existed in Phoenix since the early 1900s

Hang on a second – isn’t Phoenix hot for everybody?

Everyone who lives in Phoenix is affected by the naturally hot desert environment, and increases in temperatures from global climate change are uniform throughout the region. But the Phoenix urban core (where many low-income people of color live) warmed faster (6°C/10.8°F) between the 1940s and the present than areas on the urban edge (3°C/5.4°F) over the same period, a gradient of urban-generated heat that is unevenly distributed and varies spatially according to topography, land cover, and wind patterns. In addition, not everyone has the same capacity to fend off the worst health effects, nor is everyone exposed to outdoor heat in the same way. Low-income communities have fewer economic and other resources to help them avoid the worst consequences of extreme heat. For example, the poor have less access to air conditioning (whether owning a unit at home or paying for its use), a key way in which heat deaths are avoided. Similarly, lack of access to preventative health care to manage pre-existing conditions that can trigger hospital visits during extreme heat episodes (such as cardiovascular disease, obesity, or diabetes) is common among those living in poverty. Further, many low-income workers in Phoenix work outdoors in construction and landscaping, exposing them directly to heat.

The Phoenix urban core warmed much faster than areas on the urban edge due to the Urban Heat Island

Climate adaptation pathways that benefit the poor and vulnerable

Many prescribed adaptations, such as electricity for AC, mobility to escape the heat, irrigation for residential shading vegetation, and adequate housing, are out of reach for most low-income households, and their neighborhoods also have fewer trees, parks, and other heat-reducing green spaces. There are also barriers at larger scales in Arizona: politicians there are mostly opposed to addressing climate change; policies to improve residential indoor cooling, like home energy assistance programs, reach just 6 percent of eligible persons; substandard housing makes AC-based cooling unaffordable; and grassroots activists are more focused on priorities that affect Latinos, such as immigration and the economy, than on climate change.

In light of the individual and structural inequities that shape heat vulnerability in Phoenix, what are some ways in which climate adaptation can be realistic for the most vulnerable? We think that an explicitly pro-poor adaptation framework can help identify the physical, financial, human, social, and natural assets of low-income communities that can be harnessed to reduce vulnerability, and this requires purposeful engagement with low-income communities. We concluded our chapter by suggesting pro-poor intervention pathways in metro Phoenix:

  • Reordering of state and local government social priorities – In Phoenix there are large unmet needs among the poor for food, shelter, education, health care, and living-wage work, all of which influence climate vulnerability. The public sector has an obligation to help address these.
  • Increasing public subsidies for cooler indoor and outdoor environments – Green spaces, residential weatherization programs, and energy subsidies need to target low-income households, including renters.
  • Support for bottom-up solutions to community problems – Meaningful engagement with vulnerable communities in decision-making can empower them not only to focus on immediate issues like jobs or immigration, but also to take on existing and future climate risks.

It wasn’t lost on us that it’s very probable the original Hohokam settlement disappeared because of climate change – but we wonder if the settlers of modern Phoenix considered that. And who can forget that Phoenix has been called “The World’s Least Sustainable City”, while others ask if it’s on the brink of becoming uninhabitable? Adaptation policies must embrace climate justice at the local level while recognizing both the global scale of the climate system and the socio-spatial variability of climate impacts within regions like metro Phoenix. Pro-poor adaptation pathways must do so by calling for a fair distribution of social and climate burdens.

Photos: Barry M. Goldwater Historic Photograph Collection (FP FPC 1, Box 8, Folder 1. Historic Photographs, Places: Canals and Irrigation. 1890-1901. Arizona Collection, Arizona State University); Sanborn Fire Insurance Map from Phoenix, Maricopa County, Arizona

Pompeo Opens the Door to Deep US Nuclear Cuts (Or Large Chinese Increases)

UCS Blog - All Things Nuclear (text only) -

April 10, 2019: Oregon’s Senator Jeff Merkley questions Secretary of State Mike Pompeo about new nuclear arms control negotiations with China.

Secretary of State Mike Pompeo told the Senate Foreign Relations Committee the Trump administration wants China to join negotiations on the New Strategic Arms Reduction Treaty (New START). The treaty, which caps the number of deployed US and Russian nuclear warheads at 1550 each, is scheduled to expire in 2021.

China has a no first use policy and is believed to store its warheads separately from its missiles. Under the definition of the current treaty, China would therefore have zero deployed weapons.

It is difficult to believe Pompeo seriously considered the implications of Chinese participation in the New START treaty. If China were to become a party to the agreement, it would expect to be treated as an equal. There would need to be common limits on the number of warheads and launchers each country would be permitted to retain and deploy.

Making China subject to the same restrictions as the United States and Russia would present both countries with a very difficult choice. They would have to decide whether to reduce their numbers of deployed warheads to zero or to allow China to engage in a massive nuclear buildup to match US and Russian numbers. They could agree to change the terms of the treaty to include both deployed and stored warheads, which would capture China’s warheads, but then it would also capture the additional 2,000-3,000 warheads that both the United States and Russia have in storage. US and Russian negotiators would then need to find a way to eliminate an even larger disparity in numbers between their countries and China.

A Quick Look at Those Numbers

China currently has a few hundred nuclear warheads and enough weapons-grade plutonium to make several hundred more. The United States has 4,000 nuclear warheads (active and reserve) and enough weapons-grade plutonium to make approximately 5,000 more. Current US estimates indicate China can deliver about 140 of those nuclear warheads to targets in the United States with its approximately 80 ground-based intercontinental ballistic missiles (ICBMs) and 60 submarine-launched ballistic missiles (SLBMs). The United States could deliver as many as 800 nuclear warheads on its 400 ICBMs and a maximum of 1,920 warheads on its 240 SLBMs. The US arsenal also currently includes 452 nuclear gravity bombs and 528 nuclear-armed cruise missiles that are delivered by aircraft. China does not currently deploy any of its nuclear weapons on aircraft.

The current New START agreement caps the total of US and Russian deployed ICBMs, SLBMs and heavy bombers equipped to carry nuclear weapons at 700. Assuming new negotiations produced a 50% reduction in those numbers—a result many nuclear arms control experts would justifiably herald as a stunning success for the Trump administration—China would be allowed to add several hundred ICBMs and SLBMs to its current arsenal.
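To make the arithmetic behind that last point concrete, here is a rough back-of-the-envelope sketch (my own, not the author's analysis) using only the launcher figures cited above. It counts just the Chinese ICBMs and SLBMs mentioned in this post, ignores bombers and warhead loadings, and treats the 50% cut as purely hypothetical.

    # Back-of-the-envelope sketch using only the figures cited in this post;
    # it ignores heavy bombers and warhead loadings, so treat the output as a
    # rough order of magnitude, not a formal treaty analysis.
    new_start_cap = 700                    # current cap on deployed ICBMs, SLBMs, and bombers
    hypothetical_cap = new_start_cap // 2  # the hypothetical 50% reduction scenario

    china_icbms = 80   # approximate count able to reach the US, per estimates above
    china_slbms = 60
    china_launchers = china_icbms + china_slbms

    headroom = hypothetical_cap - china_launchers
    print(f"Hypothetical common cap: {hypothetical_cap} launchers")
    print(f"China's current ICBMs + SLBMs counted here: {china_launchers}")
    print(f"Missiles China could add under an equal cap: roughly {headroom}")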

A significantly larger Chinese nuclear arsenal doesn’t sound like a very good outcome for the United States or its Asian allies.

So What was Pompeo Thinking?

One possibility is that the secretary thinks China’s nuclear arsenal is much larger than it actually is, perhaps due to misinformation circulated in Washington a few years ago.

Not long after Pompeo won his seat in Congress, an adjunct professor at Georgetown University submitted a Pentagon-funded report suggesting China had approximately 3,000 nuclear weapons buried in a network of tunnels. The study received a favorable review from the Washington Post and gained some currency on Capitol Hill.

But the study was very poorly done. Its conclusions were based on spurious Chinese sources found by Georgetown undergraduate students using keyword searches of public Chinese websites. Their professor, Phillip Karber, misrepresented a collection of general questions about China’s nuclear arsenal posted on personal Chinese blogs as a secret Chinese military document stating China’s nuclear arsenal was ten times larger than current US estimates.

Peter Navarro, one of the leading voices on China within the Trump administration, cited Karber’s numbers in one of his books. He also claimed China’s leaders were so confident in the success of their nuclear modernization program that they were willing to start a nuclear war, a claim that appears to have influenced the Trump administration’s Nuclear Posture Review, which suggests China would resort to nuclear first use if it were losing a conventional war with the United States. It’s possible Pompeo’s thinking has been influenced by this kind of talk in the White House.

The next time Secretary Pompeo appears before Congress, someone should ask for his views on the size of China’s nuclear arsenal and his assessment of the Chinese policies guiding how and when it might be used.

Another possibility is that the secretary is using the requirement of Chinese participation to try to diminish expectations for an extension of the New START treaty. Lack of Chinese inclusion was one of the principal reasons cited by the Trump administration in its decision to withdraw from the INF treaty. There is no indication Pompeo approached China about joining the INF treaty or participating in negotiations on the New START treaty. Perhaps that’s because he assumes China isn’t interested.

China’s Response

China has consistently rejected arms control negotiations outside the auspices of the United Nations, so trilateral talks with Russia and the United States are unlikely. However, Chinese arms control experts often point out that China would be willing to enter multilateral nuclear arms control negotiations once the United States and Russia reduce their numbers to levels approximately the same as those of the rest of the nuclear weapons states, which hover in the middle hundreds rather than in the thousands.

Chinese President Xi Jinping should seize the opportunity presented by Pompeo’s remarks to engage the United States on Chinese participation in New START talks that would result in dramatically lower limits on the size of US and Russian nuclear forces. Chinese arms control experts seem prepared to tackle the difficult technical issues involved in the verification of an agreement on deep nuclear cuts. They’ve been discussing verification issues at international arms control conferences ever since China signed the Comprehensive Nuclear Test Ban Treaty (CTBT) in 1996.

Secretary Pompeo may not have intended to open the door to substantial nuclear reductions, but Xi would be remiss if he let this opportunity slip by without at least making an effort to further the discussion.

Key Questions Answered in the USDA’s New Census of Agriculture

UCS Blog - The Equation (text only) -

USDA photo

Last week, the US Department of Agriculture released the findings of its latest 5-year Census of Agriculture (the 29th in the series, with data collected from the nation’s farms and ranches in 2017), providing an eagerly anticipated update to the nation’s most comprehensive agricultural dataset. Ahead of the release, I posted my top 4 questions for the new Census. Now that it’s out, what have we learned?

1. Yes, farmers are (still) getting older.

According to the new data, US farmers continue to get older. The average age of all farmers (or “producers” in the Census) went up from 56.3 to 57.5 years, and the average age of “primary producers” increased from 58.3 to 59.4 years. Furthermore, farmers are still overwhelmingly white (95.4 percent) and mostly male (64 percent).

However, the number of female farmers reported increased by 27 percent since 2012, whereas the number of male farmers reported declined by 2 percent. Note that this result reflects, at least in part, the USDA’s changes to demographic questions, which allowed farms to list more than one producer engaged in farm decision-making.

2. Yes, farm consolidation continues to increase.

New data on farm economics show continued declines for both farm numbers (3 percent) and acres (2 percent), due to losses among mid-sized farms. Meanwhile, average farm size has increased by 2 percent. And the largest farms (with $1 million or more in sales) represented just 4 percent of total farms but accounted for a disproportionately high amount (69 percent) of total sales. Similarly, only five commodities (cattle and calves, corn, poultry and eggs, soybeans, and milk) represented 66 percent of all sales.

The latest findings also suggest that staying profitable is getting tougher on US farms. While expenses decreased slightly (mostly from lower feed costs), increased labor costs and declining value of agricultural production dented profits. Ultimately, farm incomes decreased, averaging $43,053, and only 44 percent of farms had a positive net cash farm income. Farmers also relied more heavily on government payments, which were 11 percent greater than in 2012. Likely related to these challenges, 58 percent of farmers reported that they have a primary occupation other than farming.

3. There is some evidence that healthy soil farming practices are catching on.

The most exciting news in this category is that cover crop use increased. The area planted in cover crops expanded from 10 million acres in 2012 to 15 million acres in 2017. The census also started collecting data on cover crop seed purchases, creating a benchmark for future years (115,954 farms purchased cover crop seed worth $257 million). Other conservation practices that increased included no-till farming (from 96 to 104 million acres) and conservation tillage (from 77 to 98 million acres). Though the gains are relatively small, they track the excitement we’re hearing anecdotally from farmers.

Interestingly, 30,853 farms reported using agroforestry practices, including alley cropping, silvopasture, forest farming, and riparian forest buffers or windbreaks, providing another useful benchmark for future years. This number is notably higher than data from 2012 in a similar, but much narrower, category, which found that only 2,725 farms reported using alley cropping or silvopasture.

4. New questions in the 2017 census offer new insights.

Thanks to new categories of data collection, we now know that 11 percent of farmers have a military background (i.e., currently or previously served on active duty in the US Armed Forces), and 17 percent of all farms include a farmer who has served in the military. We also now know more about farm decision-making than previous results were able to track. For example, female farmers are most involved with day-to-day decisions, record keeping, and financial management. Young farmers are more likely than older farmers to make decisions related to livestock.

Plenty more insights to come

My first glance at the data was enough to offer initial answers to my top questions, but only scratched the surface. After all, even the summary document is 820 pages long, the numbers themselves present the opportunity to explore an endless number of questions, and—as I mentioned last week—there’s a lot more data on the way!

In the meantime, a few takeaways do emerge. First, there are plenty of areas where the answers provided to my questions raise, well… more questions. For example, given that the farming population continues to grow older, and that consolidation continues, what solutions can we develop? Addressing these questions will likely require even more investment in USDA’s budget for research and economic analysis—not less. But to end on a positive note, the new census provides yet another piece of evidence for a different growing trend: farmers are increasingly adopting and recognizing the benefits of farming practices that build soil health. Now that’s good news. With the countdown to the 2022 Census of Agriculture beginning, let’s dream big about what we can do now to see an even more positive story in the 30th edition.

 

Administrator Wheeler is Hiding the Truth About Formaldehyde

UCS Blog - The Equation (text only) -

Photo: Mike Mozart/Flickr

In a letter sent this week, the Union of Concerned Scientists, along with the Environmental Defense Fund, Natural Resources Defense Council, and Environmental Protection Network, asked EPA’s Scientific Integrity office to investigate what seems to be political interference in the EPA’s recent suspension of the Integrated Risk Information System (IRIS) formaldehyde risk assessment. In his responses to senators’ questions about the assessment earlier this year, Wheeler claimed that “Formaldehyde was not identified as a top priority.” Political appointees at the agency gave the same answer when asked by the GAO for a recent report. But in documents obtained through a FOIA request, the Union of Concerned Scientists found evidence that EPA staff were not only interested in the formaldehyde risk assessment, but that as of 2017 the air office had a “strong interest in the review and are anxious to see it completed” and told EPA’s acting science advisor, Jennifer Orme-Zavaleta, that “we have consistently identified formaldehyde as a priority.” Thus, the glaring omission of formaldehyde from the EPA’s list of prioritized chemicals issued this month smells more like political interference than lack of importance to me.

In a 2017 email obtained through a FOIA request by the Union of Concerned Scientists, an EPA staffer from the air office writes that formaldehyde is a priority for the program.

What happened to the formaldehyde assessment?

We know that the formaldehyde assessment was done and ready for review in the fall of 2017, and then all movement of the draft mysteriously stopped. In July 2017, the head of the National Center for Environmental Assessment (NCEA), which houses IRIS, Dr. Tina Bahadori, wrote to the former Office of Research and Development (ORD) head, Richard Yamada, and other ORD staff to discuss a briefing that would occur that month on formaldehyde, and to inform them that “we have contracted the [National Academy of Sciences] to peer review this assessment. As a part of that agreement, we have requested that they convene a PUBLIC workshop in which they also gather data on NCEA/IRIS activities to be responsive to the NRC 2011 and 2014 recommendations to improve IRIS assessments.” The latter part of this agreement occurred in February 2018, but if the NAS was contracted to peer review the assessment, why did the EPA fail to move the formaldehyde assessment to the next stage of the process when it was ready and the path was set?

Wheeler wrote this year that the EPA’s air and chemicals offices didn’t provide a list of priorities to him when he asked in 2018. But a non-response doesn’t mean the air office is lacking in priorities. A responsible administrator would follow up with the program office, pointing out the many hazardous air pollutants that have outdated risk values, including formaldehyde. Unless, of course, the administrator was actively working to keep formaldehyde off the priority list to placate the chemical industry.

After all, the facts haven’t changed—formaldehyde is just as dangerous today as it was a year ago. It seems that political appointees at the EPA are playing a game of defeat-by-delay—willfully remaining ignorant of the facts by simply declining to listen to the scientific opinions of their own staff experts.

Unfortunately for these appointees, they have a job to do—protect public health, based on the best available science. And they can’t evade their duties by pretending science doesn’t exist. That’s why we have scientific integrity policies in place—to make sure political interests don’t overrule the clear facts and the public good.

Emails show experts’ concern—and political leaders’ indifference

In the aforementioned email from Erika Sasser, director of the Health and Environmental Impacts Division at the Office of Air Quality Planning and Standards (OAQPS), to Jennifer Orme-Zavaleta, Sasser lays out how an updated risk assessment for formaldehyde would help the air office better protect public health. According to her, “having a current cancer unit risk estimate for formaldehyde is critical for the agency’s air toxics program, for use in 1) the National Air Toxics Assessment (NATA), 2) the Clean Air Act (CAA) section 112 risk and technology review (RTR) rulemakings, 3) evaluation of potential risks from on-road and nonroad mobile sources regulated under relevant sections of the CAA, and 4) regional and local-scale risk assessments.” Formaldehyde is not just an incidental air pollutant. Sasser wrote that “more than 1.3 million tons of formaldehyde are emitted each year. While these emissions are from both natural sources and from stationary mobile anthropogenic sources, the [National Emissions Inventory] estimates that 42,000 industrial facilities emit formaldehyde. The National Air Toxics Assessments (NATA) shows that the entire US population is exposed to formaldehyde.” Sasser’s email was seen by political appointees at the agency; Bahadori forwarded it to ORD’s Yamada.

In other documents we received from the EPA, it is clear that Dr. Bahadori spent months trying to draw Yamada’s attention to the formaldehyde assessment and its release. In September 2017, the American Chemistry Council wrote a letter to IRIS about its draft formaldehyde assessment. NCEA’s Bahadori wrote back to the American Chemistry Council in October 2017, saying “we hope to complete the draft of this assessment as expeditiously as possible and make it available for public comment and peer review by the National Academy of Sciences (NAS)” and that “the only way to demonstrate our commitment to a scientifically robust and transparent formaldehyde assessment is to present the document for public comment and rigorous peer review by the NAS.”

On December 7, 2017, Bahadori wrote to Orme-Zavaleta and Yamada, “Just checking to see if you have an update on path forward for formaldehyde?” She followed up on December 20, 2017: “I wanted to follow up on the path forward for formaldehyde.” After getting a non-committal response from Orme-Zavaleta, Bahadori followed up again on January 2, 2018: “I wanted to follow up and see what the timeline for next steps might be for formaldehyde.” Bahadori was clearly doing her best to push the study through the political roadblock, and she was ignored. Now she is being moved away from the IRIS program through the Office of Research and Development reorganization, which, sources have told InsideEPA (paywalled), is likely a result of her “efforts to advance IRIS.” Only in today’s EPA is the penalty for defending one’s own scientific program to be moved far away from leading that very group.

Dr. Tina Bahadori tried multiple times to get the former ORD head, Richard Yamada, interested in moving the IRIS formaldehyde assessment through the publication process.

A risk assessment caught up in layers of interference

We know the draft is done and was completed using rigorous scientific review methods, so why not just move it to peer review and public comment? The answer is simple: industry doesn’t like the finding that formaldehyde is a carcinogen. This assessment has been held up for over a decade thanks to pushback from the American Chemistry Council, which we have documented as part of our Disinformation Playbook. And now, thanks to corporate capture of the current administration, top political officials appear to be doing the same thing from the inside to benefit their former employers and cronies. Former ORD head Richard Yamada was previously employed by Lamar Smith, a long-time critic of IRIS and of the formaldehyde study. Current ORD head David Dunlap is a former staffer with Koch Industries, of which a major formaldehyde emitter, Georgia Pacific, is a subsidiary. He has recused himself from matters pertaining to formaldehyde, but the agency’s track record on sticking to ethics agreements doesn’t give me the utmost confidence in his pledge.

Bill Wehrum, assistant administrator for the Office of Air and Radiation at EPA, had a long list of industry clients (subscription required) at his law firm before joining the agency, and has been ignoring offers from his own scientists to brief him on the chemical. And let’s not forget Nancy Beck, a former American Chemistry Council staffer now responsible for implementation of the Toxic Substances Control Act (TSCA), who has spent her tenure at the EPA checking off items on industry’s wish list. Formaldehyde will now be taken on by her office, which will mean a longer timeframe and a less comprehensive risk evaluation.

EPA’s scientific integrity office must investigate

As we write in our letter, “The completion and release of the IRIS assessment on formaldehyde would help inform science-based EPA regulations to better protect public health from this chemical. Conversely, permitting the suppression of this study to persist unchecked normalizes political interference at the agency and sends a message to career staff that their knowledge and expertise is not valued.” The EPA’s Scientific Integrity Policy “prohibits all EPA employees, including scientists, managers, and other Agency leadership, from suppressing, altering, or otherwise impeding the timely release of scientific information.” The public has the right to know whether this has occurred in the suspension of the formaldehyde risk assessment at the EPA. Every day that goes by without the scientific information informing new technology and standards that could reduce formaldehyde exposure and related health risks is an egregious affront to the agency’s mission to protect public health.

Check out the rest of the documents we received from the EPA related to formaldehyde here.


The Way We Talk About Geoengineering Matters

UCS Blog - The Equation (text only) -

Photo: NASA

Solar geoengineering describes a set of approaches that would reflect sunlight to cool the planet. The most prevalent of these approaches entails mimicking volcanic eruptions by releasing aerosols (tiny particles) into the upper atmosphere to reduce global temperatures – a method that comes with immense uncertainty and risk. We don’t yet know how it would affect regional weather patterns, or in turn what its geopolitical consequences would be. One way we can attempt to understand potential outcomes is through models.

Models are representations of complex phenomena that are used to approximate outcomes. While they have limitations, they are an important tool to help scientists and decisionmakers understand potential futures based on scientific, technological and policy changes. Given solar geoengineering’s potential, and its profound risks and uncertainties, we need more expansive modeling research on these techniques – not only to understand possible environmental impacts and risks, but political and social consequences as well.

Without this broader range of outcomes, messaging about solar geoengineering can lead to simplifications and mischaracterizations of its potential in the media. In spaces where public familiarity is low and risks are high, scientists and journalists should both be responsible for capturing the nuance and complexities around geoengineering – only a full picture will enable an informed public debate.

How we use modeling must evolve

In the case of solar geoengineering, models offer the opportunity to examine questions on a global scale – a scale at which real world experiments aren’t yet feasible or ethical.  A small set of researchers have been examining the potential outcomes of solar geoengineering through modeling impacts for several years. This research has been valuable in gaining a deeper understanding of the possible consequences of deploying solar geoengineering. However, many of the scenarios analyzed have been under idealized, or “best case” conditions – in other words, we’re not comprehensively looking at what could go wrong.

And as we all know too well, the real world rarely imitates the best-case scenario. An example that comes to mind is that of DDT. Developed as an insecticide, DDT was extremely effective at reducing mosquito populations for a number of years during and after World War II. However, its widespread use led to massive environmental harm, because its broader impacts were never thoroughly investigated before it was deployed at scale.

With more attention being paid to solar geoengineering, researchers need to explore a more meaningful range of deployment scenarios to understand risks and tradeoffs under a much broader set of conditions. Modeling is most helpful when used not just to predict a particular outcome under the best-case conditions, but rather to learn about many possible futures. With climate change, researchers have studied technical, economic and political narratives to capture a more realistic set of outcomes, and a similar strategy needs to happen for geoengineering. Only when research is done to know what can go wrong – in addition to what can go right – can we have a clearer idea of what the use of solar geoengineering could potentially entail.

In other high-risk fields, we require a high level of investigation into what could go wrong. Military war gaming exercises are a prime example: simulations of best- and worst-case scenarios are conducted by the government to see how politics, military strategy, and potential outcomes could interact in a myriad of ways – all before real combat takes place. Just as it’s the responsibility of a carpenter to “measure twice and cut once,” generals and admirals investigate war scenarios in order to save lives and minimize collateral damage. Solar geoengineering merits the same level of analysis.

Messaging and media portrayals can be dangerously misleading

Despite the risks of oversimplification, an optimistic new study titled “Halving warming with idealized solar geoengineering moderates key climate hazards” was recently published in Nature Climate Change. Its authors, scientists at Harvard, Princeton, MIT, and the Georgia Institute of Technology, modeled a simplified proxy of solar geoengineering deployed to counter half of future climate change (estimated as a doubling of carbon dioxide).

They found that under these specific conditions there could potentially be a decrease in some climate impacts (such as temperature, water availability, and tropical storm intensity) across regions. However, in addition to other limitations, the study used an idealized solar geoengineering system in the model – in other words, simply turning down the sun without the use of a particular approach, like aerosols. This can be helpful to understand aspects of solar geoengineering, but without a technology in place, it’s not realistic to make assertions about who might be worse off since use of that technology would come with its own set of risks.
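For readers wondering what “simply turning down the sun” means inside a climate model, here is a minimal zero-dimensional energy-balance sketch (my own illustrative calculation, not the study’s method). It uses standard approximate values for the solar constant, planetary albedo, and the radiative forcing of a CO2 doubling to estimate how much sunlight would need to be dimmed.

    # Minimal zero-dimensional energy-balance sketch, illustrative only (not the
    # study's method). Values are approximate, widely used textbook numbers.
    S0 = 1361.0      # solar constant, W/m^2 (approximate)
    ALBEDO = 0.30    # planetary albedo (approximate)
    F_2XCO2 = 3.7    # radiative forcing of a CO2 doubling, W/m^2 (approximate)

    absorbed_solar = S0 * (1 - ALBEDO) / 4.0   # global-mean absorbed sunlight, ~238 W/m^2

    def dimming_fraction(forcing):
        """Fraction by which sunlight must be reduced to cancel a given forcing."""
        return forcing / absorbed_solar

    print(f"Dimming to offset a full CO2 doubling: {dimming_fraction(F_2XCO2):.1%}")      # ~1.6%
    print(f"Dimming to offset half a doubling:     {dimming_fraction(F_2XCO2 / 2):.1%}")  # ~0.8%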

Because the study lacked a realistic exploration of solar geoengineering, the messaging around the technology led to overstated conclusions and mischaracterizations of its impacts in the media. While the authors were upfront about their use of an idealized scenario in the title of the journal article, some media stories focused on the benefits of solar geoengineering with limited discussion of the modeling constraints. Researchers must be responsible for putting their results in the context of their overall significance. Failing to do so led to many article headlines framing the study as having much broader implications than merited. Some of these include:

“The Side Effects of Solar Geoengineering Could Be Minimal”- WIRED

“The case for spraying (just enough) chemicals into the sky to fight climate change: A new study says geoengineering could cut global warming in half — with no bad side effects.” –Vox 

“Upon reflection, solar geoengineering might not be a bad idea” -The New York Times (subscription required)

“Radical plan to artificially cool Earth’s climate could be safe, study finds” –The Guardian

“Solar geoengineering could offset global warming without causing harm” –Axios 

In an era characterized by 280-character tweets, headlines matter. These oversimplifications from reputable news organizations do a disservice to geoengineering discussions. If readers moved past the headlines, they’d find that while journalists and authors often qualified the findings, there were extremely mixed messages about the real meaning of these results. Just as importantly, we need studies that would characterize a more realistic range of scenarios. As a newly emerging topic for public debate, it is crucial that solar geoengineering is presented in an accurate way. False impressions will only harm us when society needs to make critical decisions on how to approach it.


Make Electric Vehicle Rebates Available at the Point of Purchase

UCS Blog - The Equation (text only) -

New legislation proposed in Massachusetts would take a critical step towards making electric vehicles (EVs) affordable by offering consumers rebates at the point of sale.

While Massachusetts offers rebates for electric vehicles through its “MOR-EV” program, it does not currently offer them at the point of purchase. Instead, customers who purchase an electric vehicle must fill out this application, identifying the vehicle identification number (VIN), the purchase details, the dealership, and the salesperson. If there is still funding available when the purchase is made (and the program is constantly on the verge of running out of funding), the state sends the applicant a rebate check up to 120 days later.

Further, beginning in 2019, MOR-EV rebate levels were cut to just $1,500 for battery electric vehicles and $0 for plug-in hybrids. Massachusetts has been forced to cut rebate amounts because the state has not developed a sustainable funding source for MOR-EV. Even with the cutbacks, the program is set to run out of funding in June. Given the central role of EVs in achieving the state’s climate limits, this is a critical issue that the legislature must address immediately.

A budget amendment proposed by Representative Jonathan Hecht would address these problems by creating a new instant rebate for low- and moderate-income consumers. In addition, the Hecht amendment would restore MOR-EV rebate amounts to their 2018 levels ($2,500 for battery electric vehicles and $1,000 for plug-in hybrids). Taken together, Rep. Hecht’s legislation would make EVs a viable choice for most new vehicle purchasers.

For example, under the Hecht proposal, a middle-class customer interested in a Chevy Bolt from Quirk Chevrolet through Green Energy Consumers Alliance’s Drive Green Program might be able to lease the vehicle for no money down, at an equivalent lease rate of $150 per month on a 36-month lease. That is a great deal for a great car that will improve our environment, our public health and our economy.

We need to make EVs affordable for more drivers

MOR-EV is an important program. Its goal of encouraging the electric vehicle market, so that economies of scale would improve quality and reduce price, remains well founded. Yes, many of the direct beneficiaries are early adopters, tech enthusiasts and people with high incomes. But those initial investments have driven down costs and made these vehicles more accessible.

Today, the challenge facing EVs is how to bring the technology to all drivers. Analysis conducted by state agencies demonstrates that widespread electrification is necessary to hit the requirements of Massachusetts’s important climate law, the Global Warming Solutions Act. Passenger vehicles are responsible for over 20 percent of global warming emissions in the state. The Comprehensive Energy Plan requested by Governor Charlie Baker and conducted by the Executive Office of Energy and Environmental Affairs looked at several potential scenarios for meeting the state’s climate limits for 2030. It found that even in the least aggressive scenario, electric vehicles will have to make up 2 of every 3 passenger vehicles sold in Massachusetts by 2030. In the most aggressive scenario, electric vehicles are 7 of every 8 new vehicles sold!

A program that requires consumers to wait months before they receive their rebate is inadequate.

Many states offer EV rebates at the point of purchase

In contrast, most states that offer rebates for electric vehicles do so at the point of purchase. Most also offer larger total rebate amounts. The Delaware Clean Vehicle Rebate program offers rebates of $3,500 for consumers who purchase through participating dealerships at the point of purchase. Auto dealers who participate in Connecticut’s CHEAPR program or New York’s Drive Clean Rebate, both of which offer $2,000 for a battery electric vehicle, likewise do all the paperwork behind the scenes, giving Connecticut and New York consumers an immediate incentive without any paperwork. Colorado’s alternative fuel tax credit of up to $5,000 for a battery electric vehicle can be claimed by financing institutions at the point of purchase. New Jersey exempts EVs from the state’s sales tax, which effectively provides thousands in savings at the point of purchase.

California does not offer rebates at the point of purchase, although the state is working on pilot projects to preapprove income-eligible EV purchasers. However, California does offer much larger incentives for low- and moderate-income residents. California’s Clean Vehicle Rebate Program offers a rebate of up to $4,500 for the purchase or lease of a battery electric vehicle to low-income consumers statewide. People who live in the San Joaquin Valley or within the South Coast Air Quality Management District are further eligible for incentives to trade in an older, high-emissions car or truck for an electric vehicle or hybrid; taken together, these incentives “stack” to up to $14,000 for low income consumers. California is also exploring providing financing assistance to low income consumers.

Data from the Center for Sustainable Energy confirms that states such as New York and Connecticut that have introduced rebates at the point of sale do significantly better in stimulating the market for low- and moderate-income customers than Massachusetts.

Mass Save for vehicles

Making electric vehicle rebates available at the point of sale is one particularly obvious step towards bringing this technology to all consumers. But we need to figure out a larger and more comprehensive approach to vehicle electrification. The decision to purchase an electric vehicle can be complicated. It requires the consumer to consider a number of issues, from long-term cost savings to charging infrastructure to access to off-street parking. We need a program that will address multiple obstacles to vehicle electrification and help the consumer through the process of understanding this technology and making a purchase.

We have a great model for how to do that in the Bay State. It’s called the Mass Save program.

Thanks to Mass Save, all Massachusetts residents can enjoy a free Home Energy Assessment. As part of that assessment, a person comes to your house and explains what your options are and what incentives and programs are available to support you. Mass Save also combines direct, upfront rebates with financing assistance, offering zero-interest loans for technologies such as heat pumps, insulated windows, and solar water heaters. Several programs provide greater incentives to low-income residents, or provide efficiency technologies to them for free. Mass Save is a big part of the reason why Massachusetts has been consistently rated the most energy efficient state in the country, saving consumers hundreds of millions of dollars per year on their energy bills.

Mass Save is an awesome program because Massachusetts has devoted real resources to Mass Save from multiple dedicated funding streams. Massachusetts’ Three Year Energy Efficiency Plan calls for $772 million in energy efficiency funding through Mass Save in 2019. Currently MOR-EV has a 2019 budget of $8 million, which is projected to last the state through June. Nobody knows how the state will fund EV incentives in July. It is very difficult to build a bold or comprehensive program that addresses multiple barriers to EV adoption when MOR-EV is constantly on the verge of running out of money.
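As a rough sense of scale (my own arithmetic, not an official program projection), here is what an $8 million budget covers at the current and restored rebate levels:

    # Rough arithmetic sketch using the figures cited above; these are my own
    # calculations, not official MOR-EV projections.
    budget_2019 = 8_000_000  # 2019 MOR-EV budget, in dollars

    rebate_levels = {
        "current battery-EV rebate": 1_500,
        "restored 2018-level battery-EV rebate": 2_500,
    }

    for label, rebate in rebate_levels.items():
        print(f"{label}: roughly {budget_2019 // rebate:,} rebates before the money runs out")
    # current battery-EV rebate: roughly 5,333 rebates before the money runs out
    # restored 2018-level battery-EV rebate: roughly 3,200 rebates before the money runs out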

We need to do better than this, and we can. Representative Hecht’s budget amendment would represent a good step towards making MOR-EV a program that works for all consumers. We encourage the legislature to work with the Baker administration to make point-of-sale rebates for low- and moderate-income customers a priority, and to provide the kind of sustainable funding source that can allow our EV programs to reach a lot more consumers.

Grendelkhan/Wikimedia Commons

SNAP Rule Change Would Disproportionately Affect Trump Country

UCS Blog - The Equation (text only) -

Photo: Frank Boston/Flickr/CC by SA 2.0

USDA Secretary Sonny Perdue has signaled he may be having second thoughts about a proposed rule that could force 755,000 work-ready adults off the Supplemental Nutrition Assistance Program (SNAP). The rule, which would restrict states’ ability to waive benefit time limits for adults struggling to find work, has faced substantial backlash since it was announced in late December.

Last week, at a House Agriculture Appropriations Subcommittee hearing, representatives raised concerns about the diverse population of adults the SNAP rule would affect, which includes veterans, homeless individuals, college students, young adults aging out of foster care, those with undiagnosed mental and physical ailments, and those not designated as caregivers, but who have informal caregiving roles. These individuals can be characterized as “able-bodied adults without dependents,” even though many face significant barriers to employment. The conversation prompted Secretary Perdue to respond that this definition “may need some fine-tuning.”

Indeed.

But is Secretary Perdue’s statement also a response to the dawning reality that many of the people and communities who would be affected by the rule change are the same who helped elect President Trump to office? If it’s not—maybe it should be.

SNAP proposal would disproportionately hurt counties that voted for Trump

I’ve written previously about how the administration’s proposed changes to SNAP would make it harder for unemployed and underemployed adults to put food on the table—and why that’s bad policy for all of us. According to new UCS analysis, the proposed rule would cause 77 percent of all US counties currently using waivers to lose them—that’s a total of 664 counties from coast to coast. And my colleagues and I have crunched the numbers to show who would be hurt most. Layering data from 2016 election results, we found that more than three-quarters of counties that would lose waivers went to then-candidate Trump in the presidential election. In total, that’s more than 500 counties (and over half of them rural) that put their faith in a president who promised to bring prosperity to every corner of the country, and isn’t delivering.
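For readers curious about the mechanics of this kind of county-level layering, here is a minimal, hypothetical sketch in Python. The file names and column names are invented for illustration and are not the actual UCS analysis code or data sources.

    # Hypothetical sketch of layering county waiver data with 2016 election
    # results; file names and columns are invented for illustration only.
    import pandas as pd

    # Counties projected to lose SNAP time-limit waivers under the proposed rule
    waivers = pd.read_csv("counties_losing_waivers.csv")    # columns: fips, state, county
    # County-level 2016 presidential results
    votes = pd.read_csv("election_2016_by_county.csv")      # columns: fips, winner

    merged = waivers.merge(votes, on="fips", how="left")

    n_total = len(merged)
    n_trump = (merged["winner"] == "Trump").sum()
    print(f"{n_trump} of {n_total} counties losing waivers "
          f"({n_trump / n_total:.0%}) went to Trump in 2016")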

While the administration has boasted of low unemployment rates and high job creation during its tenure, these national figures belie the persistent need that still plagues an overwhelming number of communities. Since the 2008 recession, labor force participation has dropped, wages have remained stagnant, and hunger remains widespread: food insecurity rates in 2017 were still higher than pre-recession levels. Relying on unemployment data alone to determine whether states can receive waivers—particularly at the threshold specified in the rule—ignores critical considerations about what’s actually happening in communities, and why states are best suited to assess their populations’ needs.

Below are snapshots of three counties from around the country that would lose waivers under the proposed SNAP rule. Although each is unique, they are all difficult places to find stable employment—and they all voted for President Trump in 2016.

  • Murray County, Georgia, a mostly rural area located on the state’s northwest border, had a population of 39,358 in 2016. In this mostly white county (83.7 percent in 2016), the 24-month unemployment rate between 2016 and 2017 was 6.8 percent, nearly three percentage points higher than the national rate, and the poverty rate was 18.8 percent, 34 percent higher than the US poverty rate. Manufacturing employed the largest share of workers in the county (38.5 percent), and recent reports indicate that Murray County’s unemployment has ticked up slightly, even though Georgia’s urban areas are seeing job growth.
  • Trumbull County, Ohio, is on the eastern border of the state, with a population of roughly 200,000 and a 24-month average unemployment rate of 6.8 percent from 2016 to 2017 and a poverty rate of 17.5 percent. Just over one in five workers here are employed in manufacturing. In fall 2018, GM announced that it would close its Lordstown assembly plant in Warren, OH.
  • Butte County, California, is a mostly urban county with roughly 220,000 residents in 2016. The county is home to a diverse set of organizations and businesses, including California State University Chico, United Healthcare, and Pacific Coast Producers (a cannery cooperatively owned by over 160 family farms in Central and Northern California), to name a few. Butte is also home to Paradise, a town severely impacted by the Camp Fire in 2018. The average unemployment rate in Butte County was 6.5 percent for the most recent 24-month period, and 3 percent of the population lived in poverty in 2017.

Although the comment period for the proposed SNAP rule closed on April 10, Secretary Perdue’s comments—and continued debate among lawmakers—suggest that the issue may not yet be settled. For hundreds of thousands of adults and the communities they live in, that’s a good sign.


Science and Transparency: Harms to the Public Interest from Harassing Public Records Requests

UCS Blog - The Equation (text only) -

Photo: Bishnu Sarangi/Pixabay.

In my work as a professor and researcher in the Microbiology and Environmental Toxicology Department at the University of California, Santa Cruz, I investigate the basic mechanisms underlying how exposure to toxic metals contributes to cellular effects and disease. My lab explores how exposures to environmental toxins, such as lead, manganese, and arsenic, can cause or contribute to the development of diseases in humans. For example, some neurobehavioral and neurodegenerative disorders, such as learning deficits and Parkinsonism, have been linked to elevated lead and manganese exposures in children and manganese exposures in adults, respectively.

California condor in flight. Lead poisoning was a significant factor precluding the recovery of wild condors in California.

In my career spanning 25 years, I helped develop and apply a scientific method to identify environmental sources of the toxic metal lead in exposure and lead poisoning cases in children and wildlife. I helped develop laboratory methods for evaluating tissue samples, including a “fingerprinting” technique based on the stable lead isotope ratios found in different sources of lead, which enables the matching of lead in blood samples to the source of the lead exposure.
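To give a flavor of how isotope-ratio “fingerprinting” works conceptually, here is a toy sketch. The real method relies on careful mass spectrometry, uncertainty analysis, and mixing models; the ratios below are invented placeholders, not measured data.

    # Toy illustration of isotope-ratio "fingerprinting": match a sample to the
    # candidate source whose lead isotope ratios are closest. All numbers are
    # invented placeholders, not measured data.
    import math

    # Hypothetical (206Pb/207Pb, 208Pb/207Pb) ratios for candidate sources
    sources = {
        "lead ammunition fragment": (1.126, 2.396),
        "leaded paint chip":        (1.185, 2.452),
        "background soil":          (1.205, 2.470),
    }

    sample = (1.129, 2.399)  # hypothetical ratios measured in a blood sample

    closest = min(sources, key=lambda name: math.dist(sources[name], sample))
    print(f"Closest isotopic match: {closest}")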

In the early 2000s, I collaborated with graduate students, other research scientists, and several other organizations to investigate the sources of lead poisoning that was killing endangered California condors. Our research showed that a primary source of lead that was poisoning condors came from ingesting lead fragments in animals that had been shot with lead ammunition, and that this lead poisoning was a significant factor precluding the recovery of wild condors in California.

Our work provided important scientific evidence of the harm that lead ammunition causes on non-target wildlife, and it supported the passage of AB 821 in 2007 and AB 711 in 2013, which led to partial and full bans on the use of lead ammunition for hunting in California.

Gun lobby attempts to discredit research

Because of our research, my collaborators and I received five public records requests under the California Public Records Act (CPRA) between December 2010 and June 2013 from the law firm representing the California Rifle and Pistol Association Foundation. In summary, the requests sought all writings, electronic and written correspondence, and analytical data, including raw data, related to my research on lead in the environment and animals spanning a six-year period. The very broad records requests asked for any and all correspondence and materials that contained the word “lead,” “blood,” “isotope,” “Condor,” “ammunition,” or “bullet.” The requests essentially sought everything I had done on lead research for this time period.

One apparent goal of the requestors was to discredit our findings and our reputations, as made evident on a pro-hunting website that attacked our peer-reviewed, published findings. We initially responded that we would not release data and correspondence relating to unpublished research, because we were concerned that we would lose control of the data and risk having it and our preliminary findings published by others. As a result, the California Rifle and Pistol Association Foundation sued us in California Superior Court. Ultimately, the court ruled in favor of the university and researchers, narrowing the scope of the CPRA requests and limiting them to published studies and the underlying data they cited.

Impacts and harms from overly broad public records requests

These very broad public records requests have had a significant impact on my ability to fulfill my research and teaching duties as a faculty member at the University of California, Santa Cruz. I personally have spent nearly 200 hours searching documents and electronic files for responsive materials; meeting with university counsel and staff; and preparing for and sitting through depositions, court hearings, and testimony. Our efforts to provide responsive materials are ongoing.


While these requests have had a personal and professional impact on me as an individual, they have caused broader harms to the university’s mission of teaching and production of innovative research that benefits students, California residents, and the public at large. Impacts include:

  • Interfering with my ability to pursue research funding, conduct research, analyze data, and publish my research, because the time required to search for and provide responsive materials takes away from these other duties.
  • Squelching scientific inquiry, research communications, and collaborations with colleagues and potential colleagues at other research institutions.

By chilling research and discouraging graduate students and collaborators from pursuing investigations into topics that could put them at odds with powerful interests, these types of expansive records requests deprive the public of the benefits that research can bring, such as helping wildlife and endangered species (including the California condor) survive and thrive by removing sources of environmental lead contamination.

Why I support modernizing the California Public Records Act

I chose to testify before the California Assembly Committee on the Judiciary in support of AB 700 and the effort to modernize the California Public Records Act to protect the freedom to research and to help streamline the ability of California public universities to process and manage public records requests. The bill establishes very narrow exceptions for researchers to protect unpublished data and some peer correspondence, which would help prevent task diversion and reputational damage and would encourage inquiry and knowledge production at public universities across the state. AB 700 would also reduce the serious burden that expansive, overly broad records requests place on researchers and the courts, and help clear the long backlog of such requests. I think this bill strikes the right balance between public transparency and privacy for research. Ultimately, the public will be better served if the state provides more clarity about what information should be disclosable under the California Public Records Act.

 

Donald Smith is Professor of Microbiology and Environmental Toxicology at the University of California, Santa Cruz. He received his PhD in 1991 and joined the faculty at UC Santa Cruz in 1996. He has over 20 years of experience and has published over 100 peer-reviewed articles in environmental health research, with an emphasis on exposures to and the neurotoxicology of environmental agents, including the introduction, transport, and fate of metals and natural toxins in the environment, exposure pathways to susceptible populations, and the neuromolecular mechanisms underlying neurotoxicity.

Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.

Photo: Gavin Emmons Photo: Donald Smith

Clean Energy’s Progress, in One Simple, Uplifting Graphic

UCS Blog - The Equation (text only) -

Photo: AWEA

Lea en español

News about global warming can be sobering stuff, and some visual presentations are particularly effective at conveying the bad news. As serious as climate change is, though, it’s important to remember that we have some serious responses. A new way of looking at US wind and solar progress helps make that eminently clear.

Sobering graphic

If you, like me, find yourself at times swinging between a sense of the challenge of climate change on the one hand, and the excitement of clean energy on the other, you might appreciate the need for balance and perspective.

The progression of our effects on the global climate is captured powerfully (and frighteningly) in a viral graphic from UK climate scientist Ed Hawkins that shows variations in global temperatures since 1850. While it varies by month and year, the trend shown in the GIF is really (really) clear: The passing years have brought higher and higher temperatures.

Serious, sobering stuff, given all that comes with that global warming.

So it seems like we need things to counterbalance graphics like that, at least in part—not to take the pressure off, but to remind ourselves of where we’re making important progress, and laying the groundwork for a whole lot more.

Graphical remedy

One option is to take a look at what’s going on with clean energy in the power sector—and wind and solar, in particular, which have been marvels to behold in recent years.

A new graphic does just that, looking at the shared contribution of wind and solar to our nation’s electricity generation, in much the same way as the Hawkins graphic does: month in and month out, as the years roll by. Here it is:

The graphic, from the Union of Concerned Scientists, draws on electric power sector data from the US Department of Energy’s Energy Information Administration (EIA), and includes wind power, large-scale solar, and (importantly, given that it too often gets ignored) the increasingly significant contribution from small-scale/rooftop solar.

And this little GIF has a lot to say. It begins with wind and solar’s humble status early last decade, when wind barely registered, and solar wasn’t a factor at all. From there the spiral sure picks up steam, as each year has brought online more wind turbines (now 58,000 and climbing) and more solar panels (on nearly 2 million American rooftops, and far beyond).

On a monthly basis, the contribution of wind and solar has shot past 3% (2010), past 6% (early 2013), past 12% (April 2018)—where every additional 1% is the equivalent of more than 4 million typical US households’ electricity consumption. And on an annual basis, that progress has translated into the electricity contribution from just those two technologies going from 1 in every 71 kilowatt-hours in 2008 to 1 in every 11 in 2018.
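
For the numerically inclined, here is a quick back-of-the-envelope check of those figures, written as a Python sketch. The round numbers for total US generation and typical household electricity use are my own assumptions, not EIA's exact values.

    # Rough check of the shares and household equivalents cited above.
    # Assumed round numbers (not EIA's exact figures): about 4,000 billion kWh
    # of annual US electricity generation and about 10,000 kWh per household per year.
    US_GENERATION_KWH = 4_000e9   # assumed annual US generation, in kWh
    HOUSEHOLD_KWH = 10_000        # assumed typical household use, in kWh per year

    share_2008 = 100 / 71         # "1 in every 71 kWh" is about 1.4%
    share_2018 = 100 / 11         # "1 in every 11 kWh" is about 9.1%

    # How many households does one percentage point of total generation supply?
    households_per_point = 0.01 * US_GENERATION_KWH / HOUSEHOLD_KWH

    print(f"Wind + solar share: {share_2008:.1f}% (2008) to {share_2018:.1f}% (2018)")
    print(f"Each 1% of generation covers roughly {households_per_point / 1e6:.0f} million households")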

And the graphic clearly conveys the momentum poised to carry solar and wind far beyond. There’s a lot more progress coming, it declares—clean energy milestones to be watching out for (and making happen).

Credit: J. Rogers/UCS

Why it matters

To be clear, the new graphic and all that it represents shouldn’t cause us to lose sight of what really matters: from a climate perspective, what’s happening to overall carbon emissions, and the resulting temperature changes. It’ll take a lot more clean energy—a lot less fossil energy—in our electricity mix to help us deal with climate change.

But the progress on clean energy is really important because of the power sector’s still-substantial contributions to our carbon pollution, and the need for a lot more action. And that progress also matters because the power sector is crucial for cutting carbon pollution from other sectors, through electrification of stuff like transportation (think electric vehicles) and home heating (heat pumps!).

That’s why keeping our eyes on stats like these is key: We need to celebrate the progress we’re making, even as we push for so much more.

Sartorial solar splendor on its way?

Meanwhile, it turns out that the Hawkins graphic in stripe form has gone on to become the basis for a line of must-have clothing and more.

We can hope that the good news about the progress of US solar and wind becomes just as desirable a fashion accessory.

Photo: AWEA Photo: Dennis Schroeder / NREL

Clean Energy's Progress, in One Simple, Inspiring Graphic

UCS Blog - The Equation (text only) -

Photo: AWEA

Read in English

News about global warming can be alarming, and some visual presentations are particularly effective at conveying the bad news. As serious as climate change is, though, it is important to remember that we have serious responses. A new graphic on the progress of wind and solar energy in the United States helps make that plainly clear.

A sobering graphic

If you, like me, sometimes find yourself alternating between a realistic sense of the challenge that climate change represents on the one hand, and the excitement of clean energy's progress on the other, perhaps you can appreciate the need for balance and perspective.

The evolution of our impact on the global climate was captured powerfully (and frighteningly) in a viral graphic from UK climate scientist Ed Hawkins. The graphic shows the variations in global temperatures since 1850. While it varies by month and year, the trend shown in the GIF is extremely clear: The passing years have brought higher and higher temperatures. Serious stuff, given everything that comes with that global warming.

Given that, it seems we need tools to counterbalance this kind of graphic, at least in part. Not to take off the pressure we feel to act, but to remind ourselves of the areas where we are making important progress and building a solid foundation for much more.

Graphical remedy

One option is to look at what is happening with clean energy, especially wind and solar power, which have been progressing impressively in recent years.

A new graphic does just that, looking at the contribution that wind turbines and solar panels have made to electricity generation in the US, much as the Hawkins graphic does: month after month, as the years go by. Here it is:

The graphic, produced by the Union of Concerned Scientists, represents data from the US Department of Energy and includes wind power, large-scale solar, and also small-scale solar (which is important, given that the latter is often ignored).

And that little GIF has a lot to say. It begins with the humble state of wind and solar at the start of the past decade, when wind power barely registered in the figures and the effect of solar power was not yet distinguishable. From there the spiral accelerates, as each year has brought online more wind turbines (now 58,000 and growing) and more solar panels (on nearly 2 million American rooftops, and far beyond).

In monthly terms, the contribution of the two technologies reached 3% in 2010, 6% in the spring of 2013, and 12% in April 2018, with each additional 1% equivalent to the electricity consumption of more than 4 million typical US households. And in annual terms, that progress has translated into their contribution to the electricity generation mix going from 1 in every 71 kilowatt-hours in 2008 to 1 in every 11 in 2018.

And the graphic clearly conveys the momentum poised to carry solar and wind much further. There is much more progress on the way, it declares, represented in clean energy milestones that we anticipate (and that are waiting for us to make them happen).

Credit: J. Rogers/UCS

Why it matters

This new graphic and everything it represents should not make us lose sight of what really matters from a climate perspective: what is happening with carbon dioxide (CO2) emissions and global warming. We are going to need much more clean energy and much less fossil energy in our electricity mix to help us confront climate change.

But the progress we are making with clean energy is very important, given the CO2 pollution for which the power sector remains responsible and our need for much more action. And that progress matters even more because the power sector is crucial for cutting CO2 emissions in other sectors as well, through the electrification of transportation (electric vehicles), for example, and of heating (heat pumps).

That is why it is key to keep paying attention to figures like these: We have to celebrate the progress we are making, even as we push for much more.

Clothing that makes a statement

Meanwhile, it turns out that the Hawkins graphic, in its stripe form, has also become the basis for a line of “must-have” clothing and other accessories.

Let's hope, then, that the news about the progress of wind and solar power also becomes a coveted fashion accessory.

Photo: AWEA Photo: Dennis Schroeder / NREL

California’s Wildfire Costs are Just the Tip of the Iceberg

UCS Blog - The Equation (text only) -

Photo: NASA

As California's electric utilities grapple with the aftermath of record-breaking wildfires, the potential impact on customer bills is starting to come into focus. While it is still unclear who will end up paying for wildfire damages, one thing is clear: extreme wildfires are here to stay, and they will likely keep getting worse. With climate change not only increasing the risk of wildfires but also threatening many other economic and human health impacts, the costs of preventing extreme climate change pale in comparison to the costs of inaction.

Wildfires in California

To cover the costs of the 2017-2018 wildfires alone, one estimate indicates that residential utility bills for customers of the state's largest utility, Pacific Gas and Electric (PG&E), would need to increase by $300 annually. However, another estimate indicates that, if wildfires in California continue to inflict as much damage as they have over the past two years, PG&E bills would need to double to cover the recurring costs, while bills for electricity customers across all of California would need to increase by 50%. Unfortunately, the last two years of wildfires have not been just an extraordinary fluke.

Over the past few decades in the Western US, the number of large wildfires has been rising and the fire season has been getting longer. While there are multiple factors driving these changes, climate change is increasing the risk of wildfires. As climate change drives up temperatures and changes precipitation patterns, California can expect more frequent wildfires and more acres burned in the future.

Costs of climate change inaction

But the costs of climate change will not just show up in higher electricity bills.

A recent report from scientists at the Environmental Protection Agency calculated the costs of climate change by the end of the century under different scenarios. While the report found that climate change will cost the US economy hundreds of billions of dollars annually, it also showed that a slow response to climate change, or worse, inaction, will cost us far more in dollars, property losses, public health and human lives.

If we limit global warming to two degrees Celsius, tens of billions of dollars in damages could be avoided every year by the end of the century – which works out to savings of $250 to $600 per person per year. This just goes to show how costly it will be not to address climate change.
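
As a rough way to connect that per-person figure back to the economy-wide damages mentioned above, here is a minimal Python sketch; the population figure is an assumed round number for illustration, and the report's own sector-by-sector estimates remain the authoritative values.

    # Scale the per-person savings back up to the whole economy.
    # The US population value is an assumption for illustration only.
    US_POPULATION = 330e6                  # assumed US population

    savings_low, savings_high = 250, 600   # dollars per person per year, as cited above
    total_low = savings_low * US_POPULATION / 1e9    # in billions of dollars per year
    total_high = savings_high * US_POPULATION / 1e9

    print(f"Implied economy-wide avoided damages: roughly ${total_low:.0f} to ${total_high:.0f} billion per year")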

The US economy can avoid billions of dollars in damages by reducing global warming emissions to stay on a lower-emissions trajectory. Figures are from the Fourth National Climate Assessment.

A vicious cycle

This brings us back to PG&E, which is grappling with massive wildfire costs. If these costs end up being passed on to electricity customers, it could ultimately hinder California’s ability to prevent further climate change. If electricity prices go up significantly, people who own electric vehicles or have all-electric homes will face much higher costs. Since vehicle and building electrification are key components of California’s strategy to reduce global warming emissions, substantially higher electricity costs would disincentivize electrification and make emissions reductions more difficult to achieve.

There is a vicious cycle at play here:

  • Climate change is increasing the risk of wildfires.
  • Wildfire costs might increase the cost of electricity.
  • Higher electricity prices would disincentivize electrification, which is one of California’s main tools for preventing climate change.
  • Maintaining or, even worse, increasing our global warming emissions trajectory will lead to more climate change impacts, such as extreme wildfires.

In short, climate change may make it more difficult for California to prevent climate change.

You have to spend money to save money

At the end of the day, this problem is not going to solve itself. We will need to make all sorts of investments to prevent further climate change and to adapt to the climate change we have already locked in.

Encouragingly, the governor of California is taking climate change prevention and adaptation very seriously. The governor’s office recently released a report that details a wide array of policy options meant to address the climate change and wildfire problems faced by California’s electric utilities.

While some of those policy changes will no doubt be necessary, California also needs to continue investing heavily in solutions that we know are necessary for the transition to a clean energy economy. Renewable energy, electric vehicles, energy efficiency, and many more solutions are critical to the state’s emissions reduction goals, and California needs to continue making these investments even in the face of expensive disasters exacerbated by climate change.

These investments will not just be out of the goodness of our hearts. With hundreds of billions of dollars in climate-change-caused damages on the line, putting money into climate change prevention is a wise investment.

Photo: NASA

What to Expect When You’re Expecting the 2020-2025 Dietary Guidelines

UCS Blog - The Equation (text only) -

Photo: Peter Merholz/Flickr

Pregnancy Advice: Caffeine’s ok. Some caffeine is ok. No caffeine.

Breastfeeding Advice: Start solids at 4 months. Start solids at 6 months. Exclusively breastfeed for one year.

First Foods Advice: Homemade baby food. Store-bought baby food. Spoon feeding. Baby-led weaning.

My experience of being pregnant and having a baby in modern times has meant getting conflicting advice from the different sources I consulted, especially surrounding nutrition. Depending on the Google search or the midwife I spoke to, I heard different daily amounts of caffeine deemed suitable while pregnant. Depending on the lactation consultant who popped into my hospital room, I heard different levels of concern about the amount I was feeding my newborn. And now that I'm about to start solid foods with my six-month-old, I have heard conflicting information about when, how, and what to start feeding my child. Why is it so difficult to find what the body of evidence says about these simple questions that parents have had since the dawn of time? When I discovered that past editions of the Dietary Guidelines didn't address the critical population of pregnant women and infants from birth to two years, I wondered how it was possible that there was such a huge gap in knowledge and guidance for such an important developmental stage. That's why I'm very excited that the Dietary Guidelines Advisory Committee (DGAC) will be examining scientific questions specific to this population that will inform the 2020-2025 Dietary Guidelines, and that it has recently begun that process.

In the meantime, I will be starting my daughter on solids this week and have been trying to find science-supported best practices. It has been shockingly hard to navigate, and I was reminded of the interesting world of the baby food industry that I got to know as I researched and wrote about added sugar guidelines for the 2016 UCS report, Hooked for Life.

The history of baby food and nutrition guidelines

Amy Bentley's Inventing Baby Food explains that the baby and children's food market as we know it today is a fairly new construction, stemming from the gradual industrialization of the food system over the last century. Early in the history of baby food marketing, a strong emphasis was placed on convincing parents and the medical community of the healthfulness of baby food through far-reaching ad campaigns and industry-funded research. The Gerber family began making canned, pureed fruits and vegetables for babies in 1926 and in 1940 began to focus entirely on baby foods. At the time, introducing solid foods to babies before one year was considered a new practice. In order to convince moms of the wholesomeness of its products, Gerber commissioned research touting the health benefits of canned baby foods in the Journal of the American Dietetic Association (ADA), and the company launched advertising campaigns in the Journal and in women's magazines. Gerber's popularity and aggressive marketing quickly correlated with the introduction of solid foods, as a supplement to breast milk, at earlier and earlier ages. Earlier introduction of foods meant an expansion of the baby food market, which meant big sales for Gerber.

All the while, there were no federal dietary guidelines for infants. Gerber took advantage of this gap in 1990 when it released its own booklet, Dietary Guidelines for Infants, which glossed over the impacts of sugar consumption by telling readers, for example, that “Sugar is OK, but in moderation…A Food & Drug Administration study found that sugar has not been shown to cause hyperactivity, diabetes, obesity or heart disease. But tooth disease can be a problem.” The FDA study that Gerber referred to was heavily influenced by industry sponsorship, and the chair of the study later went on to work at the Corn Refiners Association, a trade group representing the interests of high-fructose corn syrup manufacturers. In fact, evidence has since linked excessive added sugar consumption with the incidence of chronic diseases including diabetes, cardiovascular disease, and obesity.

Today, the American Academy of Pediatrics (AAP), the World Health Organization, and the American Academy of Family Physicians all recommend exclusive breastfeeding until six months, using infant formula to supplement if necessary. The AAP suggests that complementary foods be introduced around 4 to 6 months, with continued breastfeeding until one year. But what foods, how much, and when is a little harder to parse out. Children's food preferences are predicted by early intake patterns but can change with learning and exposure, and flavors from the maternal diet influence a baby's senses and early life experiences. Research shows that early exposure to a range of foods and textures is associated with their acceptance later on. And of course, not all babies and families are alike, and that's okay! There are differences related to cultural norms in the timing of the introduction of food and in the types of food eaten. Infants are very adaptable and can handle different ways of feeding.

There’s a lot of science out there to wade through, but it is not available in an easy-to-understand format from an independent and reliable government source. That’s what the 2020 Dietary Guidelines have to offer.

2020-2025 Dietary Guidelines: What to expect

The Dietary Guidelines for Americans is the gold standard for nutrition advice in the United States and is statutorily required to be released every five years by the Department of Health and Human Services (HHS) and the U.S. Department of Agriculture (USDA). These guidelines provide us with recommendations for achieving a healthy eating pattern based on the “preponderance of the scientific and medical knowledge which is current at the time the report is prepared.” Historically, the recommendations have been meant for adults and children two years and older and have not focused on pregnant women or infants through age one as a subset of the population.

The freshly chartered DGAC will be charged with examining scientific questions relating not only to the diets of the general child and adult population but also to nutrition for pregnant women and infants, which will be hugely beneficial to all the moms, dads, and caregivers out there looking for answers.

Credit: USDA

While I was pregnant, my daughter was in a low percentile for weight, and I was told by one doctor to increase my protein intake and by another that it wouldn't matter. I would have loved to know with some degree of certainty whether there was any relationship between what I was or wasn't eating and her growth. One of the questions to be considered by the DGAC is the relationship between dietary patterns during pregnancy and gestational weight gain. I also wonder about the relationship between my diet while breastfeeding and whether there's anything I should absolutely be eating to give my daughter all the nutrients she needs to meet her developmental milestones. The DGAC will be looking at that question (for both breast milk and formula), as well as whether and how diet during pregnancy and while nursing affects the child's risk of food allergies. The committee will also be evaluating the evidence on complementary feeding and whether the timing, types, or amounts of food have an impact on the child's growth, development, or allergy risk.

At the first DGAC meeting, on March 29-30, the USDA, HHS, and DGAC acknowledged that there are still limits to evaluating the science on these populations because of a smaller body of research. Unbelievably, there's still so much we don't know about breast milk and lactation, and in addition to government and academic scholarship, there are really interesting mom-led research projects emerging to fill that gap.

The Dietary Guidelines are not just useful for personal meal planning and diet decisions; they also feed directly into the types of food made available as part of the USDA programs that feed pregnant women and infants, like the Supplemental Nutrition Assistance Program (SNAP); the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC); and the Child and Adult Care Food Program (CACFP). Having guidelines for infants on sugar intake in line with the American Heart Association's recommendation of no added sugar for children under two years old would mean some changes to the types of foods offered as part of these programs.

Nutrition guidelines will be a tool in the parent toolbelt

But if there's one thing I've learned as I've researched and written about this issue, and now lived it, it's that while the scientific evidence is critical, a whole lot of other factors inform decisions about how we care for our children. Guidelines are, after all, just that. As long as babies are fed and loved, they'll be okay. What the guidelines are here to help us figure out is how we might make decisions about their nutrition that will set them up to be as healthy as possible. And what parent wouldn't want the tools to do that?

As I wait anxiously for the report of the DGAC to come out next year, I will do what all parents and caregivers have done before me, which is to do the best I can. I have amazing resources at my disposal in my pediatrician, all the moms and parents I know, and local breastfeeding organizations. Whether my daughter's first food ends up being rice cereal, pureed banana, or chunks of avocado, it is guaranteed to be messy, emotional, and the most fun ever, just like everything else that comes with parenthood.

Photo: Peter Merholz/Flickr
