Ungulates, Overlords, and Uprisings: Artificial Intelligence Unleashed

Published Sep 19, 2023

Jess talks with artificial intelligence experts Chris Mattmann of NASA’s Jet Propulsion Laboratory and Meriem Bekka of VMware about the technological, social, and environmental aspects of our A.I. future.


We’re already living in the future. We have technology that can scan our internal organs for disease without so much as a tiny incision, cars that drive themselves, and almost the entire sum of human knowledge at our fingertips, literally…since the powerful machines we use to access that information fit in our palms and can cost as little as two hours of work at minimum wage in a major city.

Whether we like it or not, the voracious march of progress has shoved us headlong into the digital millennium. As with all major advancements in technology, new abilities raise a host of important questions. Who can use it, who can profit from it, and who is in charge of regulating its use are a few of the more pressing ones.

If you’ve been at all connected to any form of media in the last year, you’ve heard news about advancements in artificial intelligence, or A.I. technology. The public introduction of ChatGPT, the A.I.-powered natural language processing tool, made headlines globally. It’s a chatbot, accessible for free online, whose functions include executing tasks like scheduling and essay writing, and even having conversations with its human users. A.I.-driven art programs like Midjourney and DALL-E can create art ranging from copies of masterpieces to surreal pastiches like my personal favorite: babies skydiving. Yes, it’s every bit as “uncomfortable giggle-inducing” as you’d think.

Since the A.I. genie is out of the lamp, we need to talk critically about the important questions of access to, profits from, and regulation of artificial intelligence…and the very real consequences its use is having on people and our planet.

I’m your host Jess Phoenix, and this is science.

Jess: I am so excited to talk with Dr. Chris Mattmann, the chief technology and innovation officer, and division manager for the artificial intelligence, analytics and innovation development organization in the Information Technology and Solutions Directorate at NASA's Jet Propulsion Laboratory. Now, that is probably the longest title I will ever say, but it's also one of the coolest. So in addition to all of that, Chris is also an internationally recognized expert in artificial intelligence, which is why I wanted to tap his knowledge for today's show. Chris, thanks so much for taking the time to talk with me and help level up our understanding of AI.

Dr. Mattmann: Jess, I'll do my best for you and your audience. You're a brilliant individual yourself. It's a pleasure to be on your show.

Jess: Ah, you're too kind. Well, I'd first like to ask if you could just give us a brief overview of the term artificial intelligence, what it means, where it originated, and kind of the state of current play for the technology.

Dr. Mattmann: Yeah, yeah. Artificial intelligence has a number of definitions. Like the first type, kind of traditionally, is computers percepting and thinking like we do as humans. So percepting is interacting with the environment, getting observations and then deciding what to do with them, whether or not to look at something and classify it or label it as something, whether to look at bunches of things and sort them, whether to decide to build something. And artificial intelligence is the computer doing that independent of humans, thinking on its own. For many, many, many decades, artificial intelligence could not realize those goals because it lacked the computing capacity or even the constructs and the models to achieve it. In other words, it was just on paper. When I took artificial intelligence at USC in 1998, it was basically a class where we wrote out equations and, you know, truth tables and Booleans and things, and they said, "Wow, wouldn't this be really cool if we could ever implement those things, but we can't. We don't have the computers for it." And then that changed, so there you go.

Jess: Wow. So, a lot of strides in our lifetimes, which is pretty exciting when you think about it. So, I'm gonna hazard a guess that most of our listeners have heard of ChatGPT. We have a pretty well educated audience, and many of them have probably played around with the commercially available AI image generators or writing tools. So, I'd like to go a little bit deeper. What is one potential application of AI that gets you as an expert in the field, out of bed every day?

Dr. Mattmann: You know, for many years, Jess, I'll tell you, it's the computer vision aspects of AI. It's not even as much the text. I've been a long-time kind of text guy and worked in that domain for a long time, but a lot of it is the computer vision elements. And let me just rewind the clock for you and your listeners. You know, I tell a lot of people, "Hey look, what's changed?" Over the last two decades, what's changed is, remember my definition of AI that I just gave you: wouldn't it be great if we had the computing power for those computers to basically ingest the data and behave on their own, perceive and all that? Well, 20 years ago we got that through the modern big data frameworks, things like Hadoop, things like Spark, Anaconda Python, and Python. They gave us the capability to kind of agglomerate and organize data and to process it at scale, which were all of the things that we needed on top of commodity computers and other things to build AI models, and to realize the things we only wrote about in books. Then over the last 10 years, the thing that's happened is we have AI models, on top of those computing platforms, that are better than human perception for all of the basic human senses. And so, let's start with vision. So that Geoffrey Hinton guy that just left Google and is now basically saying, "Hey, AI is dangerous and all of that," he should know, because his student, Alex Krizhevsky, came up with AlexNet. AlexNet in 2012 was better than human perception for computer vision, for object recognition, for image labeling, for video labeling. So since 2012, we've had a machine learning model that could perform better than humans, and it's the foundation of modern smart vehicles, LiDAR data and Teslas or, you know, EV cars and things like that. In 2014, we had a different sense pop along. It was Baidu's Deep Speech model.
And, you know, I don't wanna set anyone's thing off, I apologize to listeners or even you at home, Jess, but the Amazon Alexa or the Googles or things like that, all of those intelligent assistants require hearing, hearing sound and translating it into text and commands and things like that. That was Baidu's Deep Speech model. So Baidu's a Chinese search engine company and things like that. 2014, we had a better-than-human ear in a machine learning model, or an AI model, for that. In 2018, to your point about your listeners and ChatGPT, we had BERT from Google, which was Bidirectional Encoder Representations from Transformers, which is the foundation of modern natural language conversation. I talk to you in text and you respond to me with coherent text and other things back. That came from 2018. So over the last decade, on top of the work on modern computing from two decades ago, we now have modern senses that are AI driven, and that's what gets me up in the morning. Like, I just saw Jeff Dean at Google. He just put out a tweet, and basically, now we have the nose. I always joke with my wife Lisa, I said, "Hey, someone needs to do an AI model for smelling." It's done. Google did it, and we're gonna have ones for touch, and at that point, now the machines can do it.

Jess: Okay. That's funny that you mentioned the nose because that was where I was gonna go. I was like, "Okay, so can computers out-smell us now?" but clearly you have people that are already on top of that. So, that kind of blows my mind, and I have the Google assistant on my phone. That's the only one that we have, and every morning I tell it, "Okay Google, please don't wake up right now." I'm looking at my phone, and then I ask it, you know, I tell it good morning and then it plays the news for me and tells me what I have to do that day, and I never thought about that. Like, I know it's AI in the back of my head, but I didn't realize it was based on technology that we've had around for a few years now.
It seemed like such a new thing because it's only been a year or two that I've had that ability. So, that's super cool. So now of course, I have to do the "I'm not a computer person" thing, so I wanna know, you know, obviously Skynet, Terminator, they laid it all out for us: this could go horribly wrong. So it sort of spirals into the regulating-AI conversation. And I've heard a bit of discussion around that, and I just was wondering if you think the technology is going to coalesce around a standard that will enable easier and more effective government regulation, or do you think the landscape's gonna stay diverse enough that regulation is gonna be a trickier prospect?

Dr. Mattmann: Yeah. It's probably somewhere in the middle of that, Jess. The challenge that we have right now is that the EU is leading a lot of the regulatory efforts as it relates to components of AI. So, let's talk about data. You know, the EU led forward with GDPR, which established data privacy protection rights and the ability to kind of remove yourself from these digital platforms online and all of that. And then, companies in the U.S., a lot of them do try to follow GDPR, things like that. Well, there's something now called the Digital Services Act, in which they're defining more privacy rights in the form of what data can be collected. Well, why does this matter for AI? It matters because Andrew Yang, during the 2020 campaign (I love this saying), basically said, "Data is the new oil." And the reason that that's really important is, you know, oil, in order to power our gas-powered vehicles, needs to go from crude when they drill it, through a refinement pipeline, to 92 octane or whatever, you know? And those refinement pipelines are just as important for AI. AI expects to see the world in terms of structured tables, in which the columns in the tables are features that it's deciding on, and the rows in that table are the samples of those features that it's observing. But the world doesn't look like that. Just like we talked about, the world is messy. It's audio, it's phone calls, it's video, it's things like that. And so because of that, there's a lot of work that has to go on in terms of regulating the data for AI, and that's one area where the EU is leading. The second thing is, once you have the data for AI, you need to then regulate what is done with AI models.
And the EU also, through the AI Act, which they're getting ready to pass and building support for, is regulating the next step: okay, we've got the data and we're training AI models. Now, what about it? So first, there's bias in the data. These are kind of the foundations of modern AI ethics. There's bias in the data. So the first training data for machine learning and AI models for smart vehicles did not include enough people of color, and did not include enough disabled persons and things like that, and as such, we kind of want cars to stop, right? You know, when they see them, and so those data sets were biased. And so, we need to balance and remove the bias from the training data sets, because AI is only as smart as the data that we train it on. Second, when the AI makes predictions, a lot of times, you know, the way I describe this is with the weather. It's gonna rain tomorrow, and then [inaudible 00:09:45] it doesn't rain. We're in Southern California, that's great, but we're sitting there like, "Why didn't it rain?" That's because they never told you that there was only a 15% confidence that it was gonna rain, and every other decision sucked after that, but the best one was 15%, and that really wasn't very good. So, whenever we have AI making a decision, we need to report the confidence for it. And then finally, all of this AI automation will displace jobs. You can think of a call center as one. A lot of people talk about truckers, truckers on very perilous 18-hour routes, and I'm not gonna get into the debates about that, but that's certainly an area where there could be a displacement of workers. Take call centers as another one.
Since 2018, Google has had a call center AI assistant that you can call up, that will make appointments for you, that will make calls on your behalf and gather the data. Like, make an appointment at the nail salon, make an appointment for your kid at the doctor, whatever; that exists. And so, in the next 24 months, what do we do with all those people? Tell them to learn to code? I don't think that's a good answer. I think we need upskilling programs. I think we need... Yeah, I think. So anyways, all of those are what the regulatory frameworks and the AI Act are doing. The U.S. is looking to follow, and a lot of the Western countries; not necessarily the Global South, they're kind of on their own and still developing these things. And then the other thing that's gonna happen is the strikes. At the root of the strikes in Hollywood, the WGA strike, the writers' strike, the strike related to the actors and the AMPTP, at the heart of those is AI. It's people's likenesses, it's their data, it's how they can be used in future generative AI, and so look for the EU to influence that.
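Two ideas from Dr. Mattmann's answer lend themselves to a quick sketch: the "refinement pipeline" that turns messy records into the feature table (rows are samples, columns are features) that models expect, and reporting a confidence alongside every prediction. This is a toy illustration only; the feature names, the humidity-based rule, and the 50% threshold are all invented for the example, not any real forecasting model.

```python
# Toy "refinement pipeline": flatten messy records into a structured
# table (rows = samples, columns = features), then have the model
# return a confidence with every prediction instead of a bare label.

def refine(records):
    """Turn messy dict records into fixed-column feature rows.

    Missing fields are filled with 0.0, the way a real pipeline
    would impute or default them.
    """
    columns = ["humidity", "pressure"]  # hypothetical feature names
    return [[rec.get(col, 0.0) for col in columns] for rec in records]

def predict_rain(features):
    """Stand-in model: returns (label, confidence), never label alone."""
    humidity, pressure = features
    confidence = min(1.0, max(0.0, humidity / 100.0))  # invented rule
    label = "rain" if confidence >= 0.5 else "no rain"
    return label, confidence

messy = [{"humidity": 15.0, "pressure": 1021.0}, {"humidity": 88.0}]
for row in refine(messy):
    label, conf = predict_rain(row)
    print(f"{label} (confidence {conf:.0%})")
```

The first record reproduces the 15%-chance-of-rain story above: the best available answer is "no rain," but only at 15% confidence, which is exactly the caveat worth surfacing to the user.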

Jess: It seems like Europe is kind of at the vanguard of a lot of this regulatory work, and then the U.S. sort of stumbles along afterwards. The EU seems to just be a little bit more of a coalition of the willing in a lot of this stuff. So, all right. So just twisting the topic just a tiny bit to science specifically. I can see a ton of applications in science, and the easiest one that popped into my mind for geoscience, which is my bread and butter, would be analyzing seismic data from different volcanic areas or from known tectonic plate boundaries where they meet up, and I think that that makes a lot of sense. There's a lot of data all the time, with minor quakes happening; there's hundreds of them all the time. So, what are other ways that we're seeing AI change the face of science, scientific research, analyzing data? Like, what are some of the things that really jump out to you?

Dr. Mattmann: Well, a couple different ways. So, I'm actually really excited by the application of AI to different scientific problems in particular, like you said, seismic, volcanology, you know, other things. Earth science remote sensing is one that's really near and dear to my heart, especially water in the Western United States. So, AI has the potential in a couple different areas. So first, a lot of times with science algorithms, we do two things. We build a full physical model of what we're trying to emulate or simulate, so we'll go out and we'll develop kind of a multi-parameter scientific model. We'll run it over, you know, tens of thousands to hundreds of thousands of runs, over huge temporal or geological scales, and then sometimes we'll even do it locally or regionally. But all of that work you could sort of emulate with generative AI, you know, in certain cases: the parameterization of those models, the outcomes of those models. Heck, you could even use generative AI to emulate or interpolate between different runs of the model. So a lot of times what scientists like you will tell computer scientists like me is, "Hey, you ran it at this 100-year scale, that was great. It had a 10-year timestep, or it had a one-year timestep. I would really have loved to see that at a monthly or a diurnal timestep," but running that again requires us to go on the big computers and redo it, and we don't do that all the time, right? But with generative AI, we could interpolate that. We could guess, in other words, or have AI run all the different scenarios and parameters in those models, at much smaller cost, right? For training the model and for producing the output. So, I'm really excited in those cases to allow us to do more exploratory analysis, question answering, science questions, hypothesis development. The other is just like you were talking about: the massive data.
And so, let's just take something that's in my safe zone, water in the Western U.S., which I spent a great deal of time working on with various scientists. People still go stick sticks up in the snow here in the California mountain ranges to measure what the depth of the snow is, because as it turns out, our water supply is very dependent on how much snow is melting and how much we're saving in the [inaudible 00:14:49] Reservoir and other places, and then letting out to different states and locales. First, that's perilous, you know? It's dangerous. I mean, unless you're Ms. Adventure, you know, and then you go up and you do it. But buy Jess's book, please. But outside of that, yeah, it's dangerous. And so, we spent a great deal of time on projects to try and measure with remote sensing instruments, and to produce data that could do that a little better, less treacherous and with more spatial-temporal coverage, so that we could have better models. Now the challenge is that produces a lot of data on a very short timescale, and a lot of times the scientists, you know, they wanna store the data, sorry you guys and girls, under your desk, and then take 12 to 18 months and review it later. But sometimes it has 24-hour requirements. To get it out, you need to use AI or machine learning to extract information faster, and that's where it could help. Imagine a cadre of robots to help Jess and all of her colleagues do this analysis faster; that's the other potential to help us.
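The interpolation idea Dr. Mattmann describes, filling in timesteps between expensive model runs instead of re-running the model, can be sketched with plain linear interpolation standing in for a learned emulator. All numbers here are invented for the example; a real generative emulator would learn a far richer mapping, but the goal (finer temporal resolution without another supercomputer run) is the same.

```python
# Toy stand-in for emulating between model runs: a physical model was
# run at a coarse 10-year timestep; we estimate values at intermediate
# times. Linear interpolation here plays the role a trained AI
# emulator would play in practice.

def interpolate_runs(times, values, query_times):
    """Estimate coarse model output at finer query times."""
    results = []
    for t in query_times:
        # find the pair of coarse samples that brackets this query time
        for (t0, v0), (t1, v1) in zip(zip(times, values),
                                      zip(times[1:], values[1:])):
            if t0 <= t <= t1:
                frac = (t - t0) / (t1 - t0)
                results.append(v0 + frac * (v1 - v0))
                break
    return results

decades = [2000, 2010, 2020]       # coarse 10-year model timesteps
snowpack = [100.0, 80.0, 50.0]     # invented snowpack index per run
print(interpolate_runs(decades, snowpack, [2005, 2015]))  # [90.0, 65.0]
```

The payoff is the one he names: an emulator answers "what about 2005?" in microseconds, where the full physical model would need another run on the big computers.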

Jess: Well, I'm not giving up going out and sampling lava myself. I don't care how good the AI gets at it, and those little AI robot dogs that they have, that they're trying to use for police work, I'm like, "You can send one of those into the lava, that's fine. It'll melt." But, okay. So, I have another sort of line of inquiry here, which is around open source AI, because there's tons of people worldwide who are contributing time and energy and creating publicly available code. So that means the pace of innovation is faster than it is in the corporate or government worlds. So, do you think open source is the future of AI?

Dr. Mattmann: That is a great question. And as someone who sat on the board of directors at the Apache Software Foundation from 2013 to 2018, I was like Mr. Open Source for a long time. Now, I'll tell you, I really appreciate open source, and especially in the context of AI, it allows people to contribute. It kind of takes, in a way, the power away from some of these corporations. There are only 5 to 10 corporations that can generate a GPT-like model. You know, it costs on the order of eight-plus figures. It takes on the order of many months to train on the world's most powerful supercomputers, which only they have access to. So, open source AI and model building does democratize this in a way. And so, that's really great. So just case in point, Meta's initial model called LLaMA, which was their ChatGPT equivalent, was trained at up to 65 billion parameters, which is, if you will, the size of the model's brain. It leaked, the first version. They didn't like that. So then Meta came out and they said, well, you know, we're gonna use open source licensing to kind of restrict the surface area, you know, the exposure or the damage area of this leak. But the problem is, when it leaked, so many people got access to it and they put it up in so many different places, and people were wondering, can I use this commercially? Can I use this academically? How can I use this? What eventually happened from that is many, many, many derivative works were created, including one really cool one by the folks at Stanford called Alpaca. I don't know what the animal thing here is, but LLaMA, Alpaca, whatever, and I've seen people just petting alpacas in imagery too. But they're cute. Anyways, but yeah. So, Stanford's Alpaca basically is an even more democratized version of the original LLaMA model.
It is provided under a Stanford open source license, which I trust, and things like that, that as an academic person, or even someone in a commercial setting, you could use in a much better way. Now, Meta learned a lesson from that, just like they did with React long ago, when they released the React web framework and they tried to license it in a, you know, quasi-open-source way that really benefited them, and it didn't work out so well, and they had to relicense it under an actual open source license. So with LLaMA 2, Meta did release their foundation model here in partnership with Microsoft. And people are like, yeah, it's open source or whatever. The devil's in the details, and so I would just watch out with that. Like, big corporations that are saying they're doing open source, they're not doing it so much. Am I a fan of open source AI in general? Absolutely, though there's nuance there in "open source." But the academics that are doing it, I'm much more of a fan of than the corporations, where if you dig down into some of this stuff, it's not really open source at all, and if you ever tried to use that to benefit you, it wouldn't work out so well. And then, the last thing I'll just say, kind of related to that, is there's big discussion now, because the open source ecosystem of AI models has made the big corporate... We have the G20; there's a C20, right, of corporations. And it has made the C20 a little, I'd say, nervous, because it's gotten out of their ability to control. And so, even the first generation of ChatGPT, I always tell people, the open source versions of that, put into those Boston Dynamics robotic dogs that Jess is cool with melting in the lava, could do some big damage, like big damage, in the wrong hands. Imagine equipping one of those dogs with almost the ability to reason.
I mean, there's a paper from Microsoft Research (Eric Horvitz, the Chief Scientific Officer at Microsoft, was part of it) questioning whether GPT-4 passes the Turing test, which is the test of whether a machine can pass as human. Like, GPT-4 on its own is very powerful. The thing I tell people is, we should be worried about GPT-6 and 7; likely, these corporations are already experimenting with growing, increasing their model size. Just imagine the power of those. Coupled with the fact that these things are already out there, even with the current foundation models, there is cause for concern.

Jess: That is incredibly informative, and I did not think we were gonna be talking about ungulates on today's episode, but here we are. LLaMAs, Alpacas, you know, let's throw all the four-footed friends into it too. So, okay. I'm gonna be wrapping up here with you in a second, which means that it's time for my favorite question, which is, you know, I represent the Union of Concerned Scientists, and this is our podcast. So, I always ask my guests an important question: Chris, why are you concerned?

Dr. Mattmann: I am concerned in general because AI is a very, very powerful tool, and I don't think the impact it's going to have on people's lives is being communicated to the right sets of people. I think AI is almost like UFOs. I think, you know, you said the word UFO 40 years ago and you got laughed off of TV; if you were a pilot, you had your license canceled. It was all real hush-hush and quiet. And now, with the serious study and rigor into these UAPs and things like that, and the realization that this has been going on for a long time unbeknownst to people, I think now, with 70% of the world looking at things like UFOs and saying, "Hey, maybe I do believe in these things," the needle has moved on that. I think we're at a similar inflection point for AI. I think that too much of it is "AI, ha-ha, hee-hee," you know, laughing it off without realizing kind of the true impact of it, again, on fundamentally people's lives: headcount reduction, skill transition. And I'm just fearful and concerned that we're gonna have a scenario in which we have maybe a large portion of our population that still wants to work on, and is only trained to work on, the buggy, when the Fords have been dropped out here in South Pasadena, and they're driving on that 110 Freeway that was built for them and not the buggies.

Jess: Excellent. I really appreciate your insight and thank you so much for coming here to join me in the virtual world, and to talk about our hopefully not future robot overlords, and I really appreciate it, Chris. Thanks so much.

Dr. Mattmann: Thank you, Jess. I really appreciate you and thanks everyone for listening.

Jess: After I spoke with Chris, I wanted to direct the rest of this episode to the real-world implications of A.I. technology. Surely there’s a cost that comes from a great leap ahead, just like we’ve seen historically any time we adopt a revolutionary new technology. Lives are changed, our physical environment is impacted, and someone is always on the hook for paying the price demanded by that new tech. Let’s dive deeper.

Jess: Joining me for the second part of our show is Meriem Bekka, senior product manager for sustainability innovation at VMware, which is a tech company focused on cloud computing and virtualization. Meriem's work provides the social and environmental context that's sorely needed when we discuss the full ramifications and possibilities of artificial intelligence technology. Thanks so much for being here, Meriem.

Meriem: Thanks for having me, Jess.

Jess: Well, I'm excited to dive into this aspect of the AI story with you. So would you be able to start by giving us some context about how you came to work at the intersection of AI and environmental and social issues? It's definitely not something that I saw a degree program for when I was in college.

Meriem: Yeah. There certainly wasn't one when I was in college either. My background is actually in politics and economics, and my story with this starts back in 2010. I was finishing undergrad and doing research in Cairo and Damascus at the height of the Arab Spring. And so, it really started at the intersection of social media and social movements. So, these uprisings... it was the first time ever... you know, social media really came into popularity in the mid-2000s, so this was the first time ever that we had social movements of that scale propelled through social media. That's become a lot more common these days. We see people organizing a lot on social media, but at the time, it was unprecedented. And so, on one hand, it was really phenomenal and incredible to see the power that technology can have to scale that type of event within days and months, and then on the other hand, there was also an underbelly to this. So we saw an increase of disinformation and misinformation being spread, and spreading like wildfire, so much faster than it could through popular forms of media. We saw the government targeting activists that were being more open on online platforms. And so I, at this point, started to become really interested in the implications that technology can have on society, the way that we shape technology and how it shapes us. I was really fortunate after that to start working with former President Jimmy Carter in his conflict resolution department, leading back-channel negotiations, and we pioneered a project there that leveraged public data on the Syrian conflict to guide humanitarian efforts and back-channel negotiations. And so, again, the Syrian conflict was the first conflict of that scale to happen in the age of social media, and so much of it was being documented online, which again, was unprecedented. Usually conflicts are a black box.
We don't know a lot of what's happening inside until much after the fact, when we can account for war crimes and understand what parties were involved. Whereas for the Syrian conflict, with a little bit of Arabic translation and good analytics tools, we were able to see the different armed groups, the different areas of control, and more effectively guide humanitarian delivery and back-channel negotiations. And this work was supported by several technology companies, and it got me really curious as to what their incentives were. You know, why were they supporting this work? Why did they choose the Carter Center and us versus another organization? What was their interest in the Syrian conflict? And so, that made me want to... I do a lot of things out of curiosity, and so from just an anthropological perspective, I wanted to join one of these companies and really understand how they were thinking about driving social change, especially because they play such a big role in it. I wasn't aware at the time whether Facebook and Twitter were really seeing the impact that they were having in these countries, as far as the way that they were shaping society and politics and government. And so, I was fortunate enough to be offered a position in the social impact department at VMware, and have been at VMware now for eight years. And I would say another really kind of pivotal point for me was around 2017 or 2018. I went to a lecture on AI with Kate Crawford, who was, I think, a principal researcher at Microsoft, and she's one of the co-founders of the AI Now Institute, and that was the first time that I really grokked the influence that AI was having in society, and how this technology was being shaped. So, you know, your listeners probably know at this point in the podcast a lot more about AI; it uses massive data sets, pulled primarily from the internet as well as other sources.
And so, you think about the data it has historically been trained on: a lot of that has been based on people who've used the internet, which is limited to certain countries and certain demographics and certain wealth groups. And so, not everyone is represented. And so, that was such a huge insight for me, that AI was not properly recognizing people who are more melanated, darker-skinned people. And AI was potentially going to be used to try to... She shared an example of AI being used to try to recognize people from an LGBT+ community to prosecute them. And so, that made me... And the other side of this that makes it really concerning is that as humans, we have a bias to trust machines more than other humans. And so, there were also stories of people driving their cars off of bridges following Google Maps, because it was telling them to go in a certain direction. And so, that was such a turning point for me, as far as wanting to just sound the alarm everywhere. You know, there's this technology, it's coming really rapidly. It's influencing so many aspects of our lives. We'll rely on it in so many different ways, and we don't have a full understanding of the data that's building it and how that is influencing our society in a more biased way.

Jess: That is such an interesting way to arrive in the AI world. And you really touched on a lot of things that are sort of tangential to the AI experience, but they're essential for us to examine as this technology becomes more widespread and more woven into our everyday lives. And so, I wanna ask you about some of the environmental realities that are going to occur, or are already occurring, because we're using more AI. And, you know, we're not seeing big brown clouds of pollution coming out of smokestacks just because we ask Siri or Google a question, but what are the real-world consequences that using AI has for our planet, for the ecosystems, for us as humans?

Meriem: Yeah. That's such a great question, Jess, and it's true we don't see a big smoke cloud every time we send an email or ask GPT a question. And even the term "the cloud," you know, is such a misnomer, because it gives you the idea that it's just water and air, but it's truly not. The impact that the information communication technology industry has on the planet is massive in general, let alone AI. So, the ICT, or information communication technology, industry is responsible for 3% of the world's carbon footprint annually. And 3% doesn't sound like a lot, but that's actually more than the entire airline industry. And so, that's already massive on its own, and AI is just one part of the many digital workloads that are run in data centers. And so, just some stats that have been starting to come out in the past five, six years. In 2019, there was a study released showing that training a single AI model of 200 million parameters, so really modest, not the level of ChatGPT and LLaMA, which are in the billions and billions of parameters, emits the equivalent carbon of five cars over their entire lifetimes. That includes the cost of creating the car, which we call embodied carbon, all the way through to the end of its lifecycle and all of the diesel that it consumes. We know that Microsoft used about 700,000 liters of fresh water during GPT-3's training, and that's the equivalent of what it takes to create 370 BMW cars. And so, that's just to train the model. And as far as what it emits on an annual basis, ChatGPT emits the equivalent of 8.4 tons of carbon dioxide a year, which is two times the amount of a person. And that might still sound a little bit theoretical, but to make it really personal for folks, because hopefully folks have played around with ChatGPT or another AI chatbot: a relatively brief conversation of 20 to 50 questions will use half a liter of water.
And so, I was thinking, you know, I just learned that a couple weeks ago, and I was thinking of all of the trivial things I've asked ChatGPT that I could have probably just Googled instead, and all of the impact that that might be having. And so, one thing that is really important for me and what I'm looking at in my work at VMware and on my team is: how do we first just make that visible to people? Because a lot of people don't know. And, you know, as we think of the millions of people all over the world using ChatGPT, most of us just aren't aware of that, and why would we be? I think it's important for people to have access to this technology and to be able to use it, and it's the responsibility of the people who are creating the technology, and us on this side, to make those impacts visible so that consumers can make informed choices.

Jess: That is a lot of information to take in, because I casually ask Google, like, you know, "What's the weather? Play me the news." I do that frequently, and that's not as intense as a ChatGPT conversation can be. But I know tons of people who have been really into using AI for art and for questions, and I'm sure there's a lot of students out there trying to get out of writing their essays with it. But let's take this down the ethical road for a moment here. Obviously, with every technology, we have to reckon with questions of ethics in how we use that technology. Have you heard a lot of discussion with your colleagues about ethics and AI, and what sort of guardrails do you think we need to have as we move forward?

Meriem: Yeah. So within VMware, we have had a discussion around ethics, and this was at least three or four years ago. VMware created its first code of ethics around AI, a set of AI principles, and I'm proud to say that sustainability is included in that, which is not always the case. But it covers the things that you would see in any sort of popular regulation on AI today: algorithmic discrimination, AI safety, data privacy, notice and explanation. And so, there has been a hearty discussion, I would say. Even speaking with my peers at other companies, a lot of folks are aware that this is an issue. Even a lot of our customers are aware that this is an issue, and are probably proceeding with more caution to ensure that we know what the legal bounds are for investing in AI systems. But one thing that I would say is still a challenge across the industry, and not just at VMware, is how do we operationalize some of these principles and ethical practices? And that's probably the biggest challenge for practitioners in the ethical AI space: thinking about, as we explore what it looks like to have fair AI, to have more explainable AI, to ensure that data protections are in place, how do we actually build that into our platforms and products and systems on the front end? And there are a variety of technical and often political reasons, and sometimes just business reasons, why that's hard. You know, most companies are held to shareholder value, and that's a tricky place to be in when you're trying to prioritize things that you might not immediately see bottom-line impact for. But what we say and what we look at for our team is that, A, this is what's going to make us sustainable over the long term, and B, this is what's going to help our customers remain compliant as all of this various regulation is coming down the pipeline.

Jess: Huh, okay. That makes perfect sense. And it's great that the conversations aren't just being started, they've been going for a while. I mean, I know AI really burst onto the scene for the general public just within the last year or so, but obviously it's been a growing force in the tech world. What I have noticed too is that every day, it seems, we get another report about a data breach. For example, I have to travel to Las Vegas in November, and I was just trying to check some different hotel options, and all of the MGM resort websites were down, like Bellagio and MGM Grand. Like, you couldn't go online and make a reservation. They got hacked and then held ransom, and that happens all the time. Obviously hotel reservations aren't as big of a deal, but if a data breach occurs and sensitive, personally identifying information can get out there about medical issues or finances or shopping habits, what role does AI play in data privacy, good or bad?

Meriem: Yeah. I feel like we're just at the beginning of understanding the scope of this challenge that is security and privacy for AI. And I think that any company that can offer security and privacy has a really good position in the market with their AI platform, because that's something that I know a lot of folks, as they build out their own models, as they want to create their own AI for their enterprise, are going to be paying a lot of attention to. And, you know, on one hand this is about the human side of it, which is protected via some policies like GDPR, which comes out of the EU, but on the business side, they need to make sure that their IP is protected, that their customers' data is protected. And so, I absolutely think that will be really important. And what's interesting with AI models, especially large language models... so as your guests have heard throughout this podcast, there are many different forms of AI, and large language models are just one type. And these tend to be particularly vulnerable. You can influence them to give results that are inaccurate. You can coax them to provide information that they shouldn't be providing. And there was actually a really, really cool event, the first one of its scale, that happened at DEF CON this year. It was a red teaming event really pioneered by Dr. Rumman Chowdhury, who used to lead AI ethics at Twitter, now X. The event had a kind of old-school hacking mentality, where you just had a bunch of hackers go at chatbots and try to push them over the edge, make them say things that they shouldn't, or reveal things that they shouldn't. And so, you got some of them revealing people's credit card information, or just information that they shouldn't have been able to. And I think that those are great examples of initiatives that are helping us to understand the scope of vulnerabilities and risks in the privacy and security space.
I would encourage people, as we're learning more about this technology, to just be really cautious about what you input into it. Stay aware of the privacy policies that companies are putting out. Zoom came under fire recently because they had updated their privacy policy, and there was really ambiguous language about their ability to use recorded customer Zoom meetings in their AI training. I mean, companies use Zoom, and so they were saying, "Obviously we don't want that." And Zoom had to come out and revise the policy and release statements to say that it was either misunderstood or they updated the policy, who knows what happened there exactly, but those are the kinds of things that consumers should really be aware of. And the FTC is working in our favor for now. They're also trying to go out there and understand, even around copyright. I know that they're doing a survey right now asking the American people what they think should be allowable to train models. Should books that we know and love, books that are protected by copyright, be used to train models? So yeah. Staying aware as we understand more of the scope of this challenge, I think, will be critical.

Jess: That is fascinating and vaguely terrifying. I'm just thinking about all the Zoom conversations I've had in the last few years. I don't know if you'd want to train an AI on that, because the AI would be really good at volcanoes and not very good at much else. Okay. So, this brings me back to the human side of things. We talked a little bit about the environmental concerns and ethics, but something that has been much more at the forefront of the national conversation in the last few years, since the murder of George Floyd and everything that resulted, is discrimination. Discrimination happens. It happens in policies, in environmental conditions, in access to education and financial opportunities, in the way authorities treat people, and it goes across every aspect of our society. So does AI contribute to either discrimination or attempts to eliminate discrimination, and is there anything that we need to do when we employ AI to safeguard against this discrimination that we see happening?

Meriem: Yeah. That's a great question, and it's one of those where both things can be true, but just based on my bias and work, I lean more towards understanding the ways that AI is propagating discrimination. There's a great example I would share that might illustrate this. Amazon decided at some point to use a recruiting tool to help it scan through resumes. And this was really well intended, you know, because humans, we also have a ton of bias. There's so much research to show that just because of a name and the way the name is spelled, or if a name seems like it might come from a certain background, we would disregard a resume. So, their intent was: we'll use AI, it'll make us faster, and machines don't have bias, right? And I think that's the biggest misconception we've had, that we trust machines. We think machines are these robots that don't have emotions, and because they don't have emotions, they're not gonna have bias. We really think they will behave that way, but unfortunately, the data the tool was trained on reflected who had historically been top performers at the company, and they were predominantly men. And so in the end, this AI was excluding any resume that had the word "woman" in it. Even if you were part of a women in engineering or women in coding group, it would exclude your resume, and so of course, they found this out really quickly and discontinued it. But I think it's such a great example of how the path to harmful AI is paved with the good intention of wanting to do things in a scalable way, in a way that does not include human bias. There's another great example of this. ProPublica did a study a few years back on how AI was used in the legal system, and the tool was twice as likely to wrongly flag Black defendants as going to recommit crimes. And so I think, again, these are ways that it can absolutely impact people's lives.
And a couple of authors have really influenced my views on this. Professor Virginia Eubanks wrote a book, "Automating Inequality," that speaks to how technology is used in government services and how it's actually propagating inequities in our society, and it's really interesting. So, I encourage folks to check that out, as well as professor Safiya Noble, who wrote "Algorithms of Oppression." She pioneered work on how our regular search algorithms were also propagating bias. So, things like: when you would search professional hairstyles, the main thing that would come up was straight hair, or kind of Eurocentric White hairstyles. And so, there's a lot of work showing how AI is propagating existing bias in our society. And I think the second-layer concern I would encourage folks to consider is how we sometimes trust technology more than humans. So, make sure we have an extra critical eye when we get an answer from ChatGPT; it should not be treated as the source of truth.

Jess: So, essentially, we need to make sure that we are training the AI to work towards the better angels of our nature and not the juvenile or nasty parts that a lot of us have and sometimes aren't even aware of. So yeah, it makes perfect sense that the machines are gonna be only as good as the people who create them, so we have to continually examine what we're doing and why. That example you gave is fascinating. I love it and I believe it. So, because we've let this AI cat out of the bag, it's not going back, and I always like to ask a last question because we are the Union of Concerned Scientists. In the age of machine learning, neural networks, and self-driving cars, Meriem, why are you concerned?

Meriem: Probably too many reasons, just given I'm a recovering cynic. But I would say right now, I'm most concerned that we will miss an opportunity to shape this rapidly emerging, game-changing technology in a way that serves everyone effectively. I do wanna end on a note of hope, though, because I think that is the counter to this, or maybe the cure: I truly believe that hope is a discipline. And I know more people will be talking about these Terminator scenarios, and I urge people, again, think critically about that. Think about the issues that we have today. But, you know, as things feel really overwhelming or scary or grim, just hold onto a sense of agency and the resolve that we can make things better; we do have the power to do that. And I think that right now, specifically with AI, it's a game-changing technology and we need more people engaged to shape it in a way that serves us all. So, I hope folks listening to this feel inspired to get involved, have their voices heard, and be part of shaping our future.

Jess: Excellent. Thank you so much for sharing your expertise and insight, Meriem, and hopefully we'll talk again in the future with some more cool AI developments.

Meriem: Thanks so much, Jess. I appreciate it.