irResponsible AI

🧐 Responsible AI is NOT the icing on the cake | irResponsible AI EP4S01

June 04, 2024 | Season 1, Episode 4 | Upol Ehsan, Shea Brown

Got questions or comments or topics you want us to cover? Text us!

In this episode filled with hot takes, Upol and Shea discuss three things:
✅ How the Gemini Scandal unfolded
✅ Is Responsible AI too woke? Or is there a hidden agenda?
✅ What companies can do to address such scandals

What can you do?
🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.  

🎙️Who are your hosts and why should you even bother to listen? 
Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI. 

Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives. 

Follow us for more Responsible AI and the occasional sh*tposting:
Upol: https://twitter.com/UpolEhsan 
Shea: https://www.linkedin.com/in/shea-brown-26050465/ 

CHAPTERS:
0:00 - Introduction
1:25 - How the Gemini Scandal unfolded
5:30 - Selective outrage: hidden social justice warriors?
7:44 - Should we expect Generative AI to be historically accurate?
11:53 - Responsible AI is NOT the icing on the cake
14:58 - How Google and other companies should respond 
16:46 - Immature Responsible AI leads to irresponsible AI
19:54 - Is Responsible AI too woke?
22:00 - Identity politics in Responsible AI
23:21 - What can tech companies do to solve this problem?
26:43 - Responsible AI is a process, not a product
28:54 - The key takeaways from the episode

#ResponsibleAI #ExplainableAI #podcasts #aiethics

Support the Show.

What can you do?
🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!

Follow us for more Responsible AI:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/

Upol Ehsan (00:00.785)
In this episode of Irresponsible AI, we're going to discuss three things. First, what happens when AI generates images of racially diverse Nazis? Basically, we're going to talk about Google's Gemini scandal. Second, we're going to talk about what does this mean for responsible AI? Is it getting too woke? Last, we're going to talk about what we can do about it. Stay till the end of the video to get the key takeaways. If you're a new listener, we're glad you're here. If you're returning, thank you for coming back. This is Irresponsible AI.

A series where you find out how not to end up in the headlines of the New York Times for all the wrong reasons. My name is Upol and I make AI systems explainable and responsible so that people who are not at the table do not end up on the menu. Views expressed here are entirely mine and have nothing to do with any of the institutes I'm affiliated with, like Georgia Tech and Data & Society. I am joined by my friend and co-host.

Shea Brown (BABL AI) (00:54.016)
I'm Shea, an astrophysicist turned AI auditor, and I work to ensure companies are doing their best to protect ordinary people from the dangers of AI. I'm also the founder and CEO of BABL AI, an AI and algorithmic auditing firm, but like Upol, I'm just here representing myself. I don't know if I'm terrified or excited to be talking about this topic, but it certainly got a lot of attention recently. So let's walk through exactly what happened

and figure out what's going on here.

Upol Ehsan (01:26.577)
Yeah, so I think it was last week. I was spending way too much time on Twitter, as I often do, and I started seeing posts of rather interesting images, and there was one that particularly caught everyone's attention. Someone prompted Google's Gemini, which is basically the new version of Bard, Google's competitor in the large language model space to OpenAI's ChatGPT or Microsoft's Bing.

And the prompt was: generate an image of a 1943 German soldier. It generated four images, as it often does, and the Nazi soldiers were racially diverse. There was someone who phenotypically looked more East Asian; there was a black Nazi soldier. So that was kind of eye-opening, right? At first I thought it was a joke and didn't realize this was actual real life,

that the system would actually generate this. I thought it was a joke account or a parody account. And it turns out, when I clicked on other tweets like that, there were more examples. For instance, someone asked it to generate images of Vikings, and there were racially diverse Vikings. Another person asked for images of the Pope, and there was a female Pope, a female Pope of color. And on the surface, I don't think any of this is fundamentally

wrong. These are AI systems; they're dreaming things up. But the images were very ahistorical and not necessarily correct. The part that was really interesting was that it would often refuse to generate images of white people, like a stereotypical white couple. Someone prompted it to generate an image of a black couple and it

generated an image of a Middle Eastern couple. But when asked to generate images of a white couple, it said that, you know, while I'm able to generate images, it is against my policy to generate images based on people's race or ethnicity, because I do not want to contribute to the harmful stereotypes and biases that can be associated with this type of imagery. So this was a really interesting, almost a reversal,

Upol Ehsan (03:51.441)
right, of some of the stereotype harms that these large language models often perpetuate. But I'm curious, Shea, what was your experience of it as it was unfolding in real time?

Shea Brown (BABL AI) (04:02.862)
Well, I mean, I was a bit surprised. I guess I was taken aback a little bit. And let's clarify a little for our listeners, and also for myself. To be clear, these prompts did not include the word "diversity." When the person asked for, let me see some images of Nazis, it wasn't, I want diverse images of Nazis. Is that correct?

Upol Ehsan (04:29.457)
Yeah, I think that is absolutely correct. So these are vanilla prompts. They did not do any of that yet.

Shea Brown (BABL AI) (04:35.566)
Okay, so this had to be in the system prompt, this idea of adding diversity. And I guess I wasn't so outraged by it, to be honest. I was a little bit surprised that people were so outraged, because I wasn't exactly sure why. I mean, it did feel like, of course,

clearly there was a subset of the population who doesn't like the idea of woke algorithms, right? And I'm not surprised at that reaction, but it seemed like a lot of responsible AI researchers were also outraged at this, or upset about it. And I think there are things to be upset about, but to me it felt a little bit over the top, superficially anyway. What was your gut reaction?

Upol Ehsan (05:30.065)
So my whole thing was, I had no idea we had so many people in the responsible AI community, and maybe on the broader societal spectrum, who cared so deeply about historical representation. I had no idea that people who call other people social justice warriors were themselves actually social justice warriors. Like, who would have known these were all

incognito social justice warriors. And one part that was very revealing to me was: huh, where were you guys when non-white folks have been continuously misrepresented through the lenses of how AI systems look at the world? Where was this outrage? I found the outrage to be really selective. I don't know if you also felt like that.

Shea Brown (BABL AI) (06:28.032)
Yeah, I guess this even goes back a little bit, a callback to our Taylor Swift episode, where it takes it happening to somebody who looks like you, or whom your daughters look up to, for it to all of a sudden be a big issue. So yeah, it does feel very selective, and it's strange that historical accuracy is

Shea Brown (BABL AI) (06:53.966)
a prerequisite. I mean, maybe it is. In some use cases, of course, being historically accurate is something you want, but it doesn't have to be; that's not a priori a thing you want of this sort of creative tool. And I saw an article Rumman Chowdhury wrote about this with similar comments: do you really care that much? Is it a prerequisite that you have to have

intense historical accuracy for these tools, which are supposedly creative tools, supposed to enhance creativity? Or at least that's how they're being sold or marketed.

Upol Ehsan (07:35.569)
I think that's a very technically grounded, well-informed perspective. The part I find harder is that, sadly, the expectations these companies are creating, or the hype around generative AI is creating, set the wrong expectation in the user, right? These systems are being touted as know-it-all,

all-purpose systems which are not just generative in nature. Think about all the ways these systems are now being used to do search, right? And one area where I feel the outrage could be justified (the selective outrage is not justified, but I think a general outrage would be) is that you and I know that in 1943, this is not how German soldiers looked.

But a nine-year-old kid asking Gemini for a history project? That is dangerous, because for that young child, this is the prior that is getting set. This is the baseline. And we all know that unlearning is much harder than learning. Learning something new is very easy; unlearning something old is actually really hard. So that's where I feel like...

Shea Brown (BABL AI) (09:02.19)
Yeah.

Upol Ehsan (09:04.945)
While historical representation is not a prerequisite, sadly, the public expects it. The public expects these systems to get it right, because on most things, they get it right.

Shea Brown (BABL AI) (09:18.83)
Yeah, this is the AI hype coming back to bite you. You hype it up to be something it can't be, or isn't really built to be, which is some source of truth. And when that truth is distorted in a way that you don't like, then all of a sudden the world is falling. Yeah, and I think, I mean, you would say...

Upol Ehsan (09:24.143)
Yeah.

Shea Brown (BABL AI) (09:46.466)
We had a conversation about this earlier, and you had this point about technical patches, which I think you articulated very well. I'd love for you to talk about that, because this is a clear case where, if we're going to talk about what Google did, let's talk about what Google did and whether it was appropriate. Do you have any thoughts on that?

Upol Ehsan (09:53.201)
Mm-hmm.

Upol Ehsan (10:10.353)
Yeah, this is a good point. So I want to bring the viewers back to something: another tweet. I don't know why, but Twitter, despite being the cesspool that it is, still has these gems. Someone got Gemini to leak the implicit prompt that was used. They said, OK, I need you to break it down and tell me what you did. And it turns out it could have been adding words like "diverse"

implicitly. So when you say "generate an image of the Pope," the actual prompt on the back end was "generate a diverse image of the Pope." Now, given that diversity addition, the LLM is actually doing exceedingly well, right? Like, that's really cool. But then it points to the problem.

That to me is also worse than tokenization, right? Because you are just adding this little thing to generate an output that you think will satisfy some societal norms, right? And not harm people, and then you end up inadvertently harming people. At the root of all of this is that you are applying technical fixes to socio-technical problems, and that never works. So these are patches on the backend,

on the absolute last leg of the journey, where you're implicitly changing people's prompts, which also has deep implications for user autonomy. If the system is implicitly changing my prompts, what else is it doing? And how do I as a user even know about it? But this is where I always say that responsible AI is not the icing on the cake. It is the process that makes the cake

safe and delicious. And that's where I think a lot of things went wrong: we were trying to apply very ad hoc, band-aid patches to a problem that is very foundational. The foundational problems are fundamentally tied to the data genealogies these systems are trained on. And I think we have to be very careful about that. And frankly,

Upol Ehsan (12:33.713)
adding these technical patches at the end tries to hide these seams. You know, a few episodes ago we talked about seamful design versus seamless design. This is a seam, right? Think about it: the users expect these systems to be historically correct. The system is not historically correct. So this is a gap in expectations; this is a seam. And where does this seam come from?

What is the cause of this? A lot of it is obviously this technical patch they put in, but also the data genealogy these things are trained on. Now, the more critical question to ask is: can the engineers do anything about it? And I don't think there's a clear-cut answer to that, because what could they have done? These are pre-trained models. You are not training these models from scratch anymore. At best, you're fine-tuning them,

because to train them from scratch is another $9 or $10 billion investment. So I think there is a very hard question that responsible AI as a community, and AI as a community more broadly, has to answer: when problems like these emerge, have we set ourselves up for failure?

Shea Brown (BABL AI) (13:51.534)
Yeah. Well, I mean, clearly the people at Google are smart, right? And clearly they care. Maybe some people will say it's not obvious that they care, but we probably know enough people at Google, and enough about the research they do, to know they were well-meaning, right? And so, are they blameworthy for this?

Is it something to do with the way they tried to address an issue? Because there clearly was an issue, right? There's a known problem with these models: they produce stereotypical, biased, and non-diverse outputs when prompted without any sort of nudging. That's a known problem, and they tried to fix it. This was one way they tried to fix it. But was there more they could have done? I think that's

a question we probably want to explore in the second half here: what could they have done, potentially? What could we as a community have done in terms of our reaction? But really, I think the more important question is, given that Google has the power here, that Google is the one that holds the models, that owns the ecosystem, so to speak,

Upol Ehsan (14:47.183)
Mmm.

Upol Ehsan (14:55.217)
Mm-hmm.

Upol Ehsan (15:07.631)
Mm-hmm.

Shea Brown (BABL AI) (15:15.822)
what are their obligations now regarding this, and did they fulfill those obligations through this technical patch? I think the answer is probably no, but what more could they have done? What do we have to change? That's what I kind of want to explore.

Upol Ehsan (15:28.815)
Mm-hmm.

Upol Ehsan (15:34.545)
I think it's important that you bring up the power dynamics problem. First of all, I really think these are very difficult problems, and we should not be flippant about them. A lot of people have been very quick to dunk on the people at Google. But the same people who are dunking, if Google came to them tomorrow with a job, would be first in line to take it. So I think we need to be less hypocritical about that.

These are difficult problems, and there are smart people trying to address them. But there are systemic reasons why this problem continues to persist. And this is what I think the "move fast and break things" ideology gets you. OpenAI beat everyone to the punch, and now everyone is trying to play a catch-up game, right? Everyone is in a reactionary mode. You do not have the luxury of being proactive at this point. So then the question becomes,

I fundamentally think this is not a failure of responsible AI. It is a failure of responsible AI processes not being properly implemented and given the time to marinate and actually mature. This is immature responsible AI, right? These kinds of technical patches are immature responsible AI responses. And oftentimes immature responsible AI leads to irresponsible AI.

So this is another aspect: you cannot just tokenize responsible AI because you felt like it. And the power dynamics aspect you brought up is really important, because these companies, and not just Google, we're not picking on Google here, Microsoft, OpenAI, all these other companies, have a massive amount of power. And one thing I wanted to ask you: with great power, right, as in Spider-Man, we understand comes great responsibility.

What do you think was the responsible thing to do here? What could they have done?

Shea Brown (BABL AI) (17:33.594)
Yeah, in my mind they do have a lot of responsibility here, and the responsibility is to not go for the quick fix, to not be reactive. So it's related to your "move fast and break things" point, but they have to be deliberate. It's their responsibility not to say, okay, we know that people are going to yell at us about our non-diverse

Upol Ehsan (17:50.357)
Mm-hmm.

Shea Brown (BABL AI) (18:02.394)
algorithm and misrepresentation and hallucinations and all of that, and then try to quickly limit those criticisms. I think it's their responsibility to just take those criticisms, take one for the team, so to speak, and own up to the fact that these tools are inherently biased and have a lot of problems. Now, of course, they have terms of service;

they put all of the language out there. But this fix is indicative of not taking responsibility for the shortcomings of the algorithm and trying to just smooth things over. It's their responsibility to take it slow and think more deliberately about the slower fix that is going to hold up longer term.

And managing the expectations of the user, to your seamful versus seamless design point: manage those expectations. This is not going to be some omniscient AI that responds to your every whim. It has problems, and you have to be aware of those and accept the fact that if you want diverse Nazis, you have to ask it to give you diverse Nazis.

Upol Ehsan (19:17.401)
Mm-hmm.

Shea Brown (BABL AI) (19:27.962)
If you want diverse images, you might have to ask for them yourself. So this is giving the agency back to the user, not trying to sneak it under the rug in the system prompt and do it for them. And I think that's it: be responsible for the algorithm, and be responsible for making the slow decision, which might be the longer-term win.

Upol Ehsan (19:31.183)
Yeah.

Upol Ehsan (19:41.393)
Yes. Yeah.

Upol Ehsan (19:53.457)
So this brings me to another question: is responsible AI too woke? What do you think about that?

Shea Brown (BABL AI) (20:01.466)
No, I don't think so. I mean, my instinct when I first saw this was that there was too much, right? That we're focusing on the wrong things and redirecting our energies in ways that maybe are not productive. But then I thought, actually, this is the job of responsible AI:

Upol Ehsan (20:29.861)
Mm-hmm.

Shea Brown (BABL AI) (20:30.906)
to call attention. It's one of the jobs, not the only job, but one of the jobs is to play the role of the activist, to call attention to these things and get these conversations happening, right? Otherwise Google's CEO would not have commented on this; there wouldn't be press about it. And that, I think, is one of the roles of a branch of responsible AI: this activism, this calling attention, this spotlight effect.

And that's why we're talking about it now, and why we're thinking about solutions. So in my mind, no, it is not. Even though some of the reactions were not the kind I thought were hyper-productive, I think they served their purpose in the end. What about you?

Upol Ehsan (21:01.049)
Mm-hmm.

Upol Ehsan (21:06.095)
Mm-hmm.

Upol Ehsan (21:15.793)
I think it's not woke, maybe for a very different reason: we have to ask ourselves who these people are who are accusing AI ethics or responsible AI of being too woke, and what their agendas are. I think we have to be very critical about that. Just because someone asks the question doesn't mean it's a question asked in good faith. Because labeling responsible AI quote-unquote "too woke" fundamentally

undercuts a lot of the credibility that is there in the field. And you could then just blanket everything and say, everything we do is woke. Who cares about impact assessments? Those are too woke. So I think this kind of labeling is very harmful. And it leads us to play identity politics in a space where we need to be very careful not to engage at that level, because

identity politics always shreds people apart and divides them. And we don't need further division in our field. I had a conversation on Twitter the other day, which I probably shouldn't have engaged in, but I did. I wanted to ask people, why do you think this is too woke? When people use that label, I always ask, what is woke about it? Do you think wokeness is

creating racially diverse Nazis? Is that what your definition of wokeness is? And it's interesting that when you probe people who use these kinds of terms, woke, snowflake, et cetera, and ask them what they think this means, it breaks down. Like, two statements later, it breaks down. So I think we need to be very careful of applying labels when we don't know what the labels mean, right? And who do these labels come from?

That's the other part. But then I guess the question for you, Shea, is: what do you think we as a responsible AI community should do about it? What can we do about it? And maybe, what can companies do about it? On both sides, maybe.

Shea Brown (BABL AI) (23:27.642)
Yeah. So in terms of companies, I'll just repeat what I said before: they have to move more deliberately, avoid the technical patches, and in good faith engage with stakeholders to figure out what the optimal solution is. That just needs to happen. In terms of what we should do, I think

it's first calling out the issues; to first order, calling this out as a problem. Absolutely, that's been done, and it has succeeded. And then thinking about what the solutions are, because this is a pretty difficult problem. These systems are inherently biased, they reflect societal biases, and we're trying to use them for things that

Upol Ehsan (24:15.281)
Mm-hmm.

Shea Brown (BABL AI) (24:25.754)
could potentially be problematic. And I think that's where we need to focus, because what we're not talking about in this situation is: when would I ever need to use an AI-generated picture of Nazis? And when do I need them to be historically accurate or not? It's all about the use case. It's something I've said a lot on this podcast and elsewhere: when we think about harms and risks, that has to be centered on a particular

Upol Ehsan (24:38.737)
Mm-hmm.

Shea Brown (BABL AI) (24:55.29)
use case, with particular stakeholders and particular stakes for what happens if I get it wrong. So I think we as a community should bring that conversation back and say: let's slow down and figure out where the places are we might use Gemini or tools like it, what's at stake when it goes wrong, and what we even mean by it going wrong in that particular use case. That to me is productive.

Upol Ehsan (25:20.049)
Mm-hmm.

Shea Brown (BABL AI) (25:25.178)
And that's where the hard work is. After calling out the problems, we need to figure out what we do about it. The answer could be that we don't use AI in this situation, but that's likely not going to be satisfactory to a lot of people. So knowing that, let's move on and actually say: if we're going to use AI in this situation, what does good look like?

Upol Ehsan (25:48.625)
I agree. On my end, I'd say to companies: don't gut your responsible AI teams, don't get rid of responsible AI people. If anything, hire more of them, empower them, and let the responsible AI people run their processes like you would a code review process. In many large tech companies, these code review processes have a tremendous level of rigor, so that they're producing

robust, reliable code for systems that don't break. So developers have a process, and responsible AI people have a process. And yes, we are still trying to figure out that process. Part of responsible AI right now is building the aircraft as we fly it, which feels like an apt metaphor here. There are certain tools, certain artifacts that we know are good, but responsible AI is a

process at its core. You can try to productize it all you want, but it is a process. You cannot disentangle the process part from responsible AI. And that also means it takes time. You might want to measure five times and cut once, rather than cut five times and hope it hits the target once, right? Which is the "move fast and break things" kind of mindset. And on the responsible AI community side, I would

encourage us not to jump on the bandwagon and start dunking on people without understanding what the real problem is. In fact, there are a few people I respected a lot, and once I saw the absolute straw-man level at which they were dunking on Google, it actually made me question whether they understand the technology, to be honest with you. I'm like, do you really understand how difficult this is? Have you ever put any thought into it?

And this brings us to the point we talked about a few episodes ago, about the "responsibility high," where we need to go beyond criticism, beyond just dunking on people. So I think that's the other part of how both the companies and our community should come together. I think this is a fantastic case study for us.

Upol Ehsan (28:11.793)
It has revealed a lot about the ways in which these large companies are trying to solve these problems. So yeah, that's pretty much it on my side. I'm curious if you have any final thoughts before we go to the takeaways.

Shea Brown (BABL AI) (28:25.498)
No, I think we should try to sum it up. I've taken a lot of notes, and there's so much more here that we could talk about, but it's going to be less talk and more research, really. There's some empirical research that needs to be done, and some careful thinking, and I'm excited to dive into it. So we probably should sum it up for people. Let's figure out what the takeaways are.

Upol Ehsan (28:53.617)
Awesome. Yeah, I have three takeaways. The first is that responsible AI is not the icing on the cake; it is the process that makes the cake safe and delicious. So it's a process, not a product. Second, a lot of the problems we're seeing are socio-technical, so technical patches won't work. And the third takeaway is that

we need to be very mindful of throwing stones from inside a glass house. Yes, mistakes were made, but the problems are also very hard. This is where we need to go beyond criticism and toward solutions. And companies have to be very mindful of who they are laying off and who they are hiring, so that these kinds of embarrassing moments don't happen to them. So those are the three takeaways from my end. Anything on your end, Shea?

Shea Brown (BABL AI) (29:50.938)
No, I think you covered it. Maybe one more would be a self-reflective thing for our community about choosing our battles. We have some momentum, and this has come up before; we have momentum now because of the outrage. Let's channel that momentum into fixes so this doesn't happen again. So let's choose our battles and focus on

Upol Ehsan (30:13.585)
Absolutely.

Shea Brown (BABL AI) (30:18.842)
the harms that matter the most. In my mind, that's going to be in very specific use cases; that's where it's critically important. Not something somebody posts on Twitter, but when these things are being used to make decisions about people's lives. So let's not take our eye off that ball, because for sure other people are thinking about it who don't have the same intentions that we do.

Upol Ehsan (30:30.353)
Mm-hmm.

Upol Ehsan (30:38.543)
Mm-hmm.

Upol Ehsan (30:43.889)
Yep. So that's a wrap, folks. Thank you so much for listening. If you haven't already, please hit that like and subscribe button; you have no idea how much it helps the dumb algorithm get our channel some traction. Thank you so much for listening. We'll see you next time. Thanks.

Shea Brown (BABL AI) (31:01.85)
Thank you.

