irResponsible AI

🔥 The Taylor Swift Factor: Deep fakes & Responsible AI | irResponsible AI EP3S01

June 04, 2024 | Season 1 Episode 3
Upol Ehsan, Shea Brown

Got questions or comments or topics you want us to cover? Text us!

As they say, don't mess with Swifties. This episode of irResponsible AI is about the Taylor Swift Factor in Responsible AI: 
✅ Taylor Swift's deepfake scandal and what it did for RAI
✅ Do famous people need to be harmed before we do anything about it? 
✅ How to address the deepfake problem at the systemic and symptomatic levels

What can you do?
🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.  

🎙️Who are your hosts and why should you even bother to listen? 
Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI. 

Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives. 

Follow us for more Responsible AI and the occasional sh*tposting:
Upol: https://twitter.com/UpolEhsan 
Shea: https://www.linkedin.com/in/shea-brown-26050465/ 

CHAPTERS:
00:00 - Introduction
01:20 - Taylor Swift Deepfakes: what happened
02:43 - Does disaster need to strike famous people for us to move the needle? 
06:31 - What role can RAI play to address this deepfake problem?
07:19 - Disagreement! Deep fakes have both systemic and symptomatic causes
09:28 - Deep fakes, Elections, EU AI Act, and US State legislations
11:45 - The post-truth era powered by AI
15:40 - Watermarking AI generated content and the difficulty 
19:26 - The enshittification of the internet 
22:00 - Three actionable takeaways


#ResponsibleAI #ExplainableAI #podcasts #aiethics #taylorswift

Support the Show.


Upol Ehsan (00:01.54)
In this episode of irResponsible AI, we're going to discuss two things. First, we're going to talk about Taylor Swift and deepfakes. And second, we're going to talk about what that means for us in responsible AI and what we can do about it. If you're a new listener, we're glad you're here. If you're returning, thank you for coming back. This is irResponsible AI, a series where you find out how not to end up in the New York Times headlines for all the wrong reasons. My name is Upol and I make AI systems explainable and responsible

so that people who are not at the table do not end up on the menu. Views expressed here are purely my own, nothing to do with any of the institutes I'm affiliated with like Georgia Tech and Data & Society. I am joined by my co-host and friend.

Shea Brown (BABL AI) (00:43.926)
I'm Shea, an astrophysicist and AI auditor, and I work to ensure companies are doing their best to protect ordinary people from the dangers of AI. I'm the founder and CEO of BABL AI, an AI and algorithmic auditing firm, but like Upol, I'm just here representing myself. So we've got a pretty hot topic that we're covering this time, Taylor Swift and the deep fakes. I don't know if we're trying to be clickbaity, but...

It does seem to be pretty apropos.

Upol Ehsan (01:15.792)
Yeah, and you know, to be honest, I think this was the first time deepfakes got this kind of attention. So just to give viewers an idea of what happened: recently there have been some sexually inappropriate deepfake images passed around on social media, primarily on Twitter. Some of the images have received 47 million views. And you know, we all know how deepfakes in particular use machine learning techniques to create images,

oftentimes inappropriate images, but this time it happened to hit one of the most popular figures in the global arena, Taylor Swift. And the part that has been very interesting, Shea, is that for the first time, I feel like, from the White House to Microsoft's CEO to everyone, it's bipartisan. Like, when was the last time you got a bipartisan kind of effort going in this country anymore?

All of them have been advocating a lot about how do we wrangle this thing? How do we tackle this thing? And for the first time, I feel like, even though these harms have existed for a long time, we finally have a spotlight on them. I'm curious to hear your thoughts. What did you take away from this?

Shea Brown (BABL AI) (02:37.042)
Yeah, I mean, I think for sure the spotlight effect is happening. And the big question is, and I think we've talked about this before, why does it take something happening to somebody famous to move the needle on this issue? I think that's a big question here. And I think the second question, which we will probably tackle a little bit later, is

now that the spotlight's there, what do we do about it? But yeah, what are your thoughts? Clearly it's concerning that, all of a sudden, when it happens to somebody famous, everybody cares about it. Is that okay?

Upol Ehsan (03:23.328)
Huh, I don't know. I don't know if it's okay or not. Sometimes it's revealing. I remember during COVID, I think the whole world took COVID seriously when Tom Hanks had it. It almost feels like we need to have a celebrity or a very famous person to be affected by a tragedy for all of us to take it seriously. Another parallel with COVID is recently the Senate had hearings on Long COVID. And all of a sudden people are serious because a lot of the senators themselves have Long COVID or have relatives who have Long COVID.

So then the question becomes, as a society, what does that really tell us, that we need to have a famous person or someone in positions of privilege and power to have been impacted by this? And with the Taylor Swift incident, what was very revealing to me is how swiftly things happened, no pun intended, right? Because it was just unanimous from all sectors, the industry all of a sudden wanted to get their act together. And-

What is interesting though is deep fakes, right, by themselves are not always harmful. Think about the initial use cases, right? HR training videos, or bringing Tupac alive, or making a collab between Eminem and Biggie, for instance. And with lyrics that you wrote, but sound like Eminem and Biggie. So the whole point I think is that

how we are weaponizing certain AI techniques that obviously have very bad effects, but because guardrails have never been put around this in a meaningful way, we are now suffering. Which brings me to this question for you: what kind of guardrails can we put around these deepfake things? I'm just curious to hear your thoughts on that.

Shea Brown (BABL AI) (05:17.918)
Yeah, I mean, I think there's kind of a fundamental question of, in my mind, what do we as a society, what choices do we need to make as a society here? Because in my mind, as a responsible AI practitioner, I'm not sure I'm ready to make calls for what we want. This is such a new situation. It's not like,

Like you said, the harms are not, it's not obvious what the harms are. It's not just totally obvious that it's a bad thing to use a deepfake for something; it's whether you do something hurtful with it. And so we don't wanna totally regulate away the technology in general, but we do wanna figure out as a society, what's our limit, what's our line in the sand? And in my opinion, I think

we're kind of moving on to our second topic a little bit here, but, you know, now that we have the spotlight on this issue, separate from the fact that it's problematic that it took Taylor Swift to get the spotlight there in the first place, but now that we have the spotlight there, what role do we have to play as responsible AI practitioners? And my take on it, my initial take, is that there needs to be a lot more

stakeholder input. Society needs to weigh in, because there does have to be a line in the sand. And then, after that line in the sand, saying this is inappropriate, this is harmful, then we can start doing our job: figuring out how do we inculcate that, how do we enforce that, how do we make it easier for people to do the responsible thing now that we've kind of decided where that line is? So I'm curious about your take on that. Like, do you agree with me?

And/or do you have a different approach to that?

Upol Ehsan (07:17.032)
I think I partially agree with you. I don't think I fully agree in the sense that the issues that are happening here have both systemic issues as well as symptomatic issues. And we need to deal with both of them. So the harm that happened with Taylor Swift, there is an entire ecosystem.

that is socio-technical here, right? There are societal aspects of causing harm and bad faith actors using technology to do harm. I don't think that's new. But there are also systemic issues in the way we have built the AI enterprise. And from the word go, we really haven't put any notion of guardrails on it. Because initially, this deepfake technology was maybe a paper at a computer vision

conference. I remember very early on there was a paper that talked about how these deepfakes could be weaponized, and I don't remember a lot of true concern around it. So I think there is a systemic issue that as a society we'll have to deal with, and we have to be very honest with ourselves that this is, in many ways, the monster that we have created and now have to deal with.

But then I also agree with you where I feel like there are symptomatic issues. So what are some of the symptomatic issues? The symptomatic issues could be that, look, now we have a spotlight. Now we have the momentum. Now we have agreement. This is the moment to capitalize. And I fundamentally think that that's true. We have to do that because, you know, not all issues will get a fair amount of time across all parties. And if we have a moment where we can really make a difference, we should.

Shea Brown (BABL AI) (09:12.064)
Yeah.

Upol Ehsan (09:12.236)
I wanted to ask on your side, are there other examples on deepfakes and this notion of guardrails? Are you aware of any regulations maybe on the EU side that are trying to address it?

Shea Brown (BABL AI) (09:26.458)
Oh, yeah. Deepfakes were for sure in the EU AI Act, but we'll have to wait and see what the final version is. I hear rumors that there's been less emphasis on deepfakes as it's being revised. But actually, there's a lot happening locally, like at the state level.

Almost every day I hear, and partly this is because of the election coming up and this Taylor Swift thing, that there are a number of states that have proposed regulations which are going to prevent deepfakes from being used in particular contexts. And in particular around elections, trying to manipulate and misrepresent candidates and things like that.

We're also gonna see regulations now, after the Taylor Swift thing, about using it to sexualize people or to create non-consensual imagery that is inappropriate. And so I think locally that's gonna happen, and for the EU we'll wait and see. But in some sense, that's the worry: that's not society weighing in in a measured way.

Upol Ehsan (10:43.968)
Hmm.

Shea Brown (BABL AI) (10:54.434)
That's like a rapid response to a panic, a panic button: something happened, people are freaking out. And I think we need to, now that we have the spotlight and the conversation on this topic, let's get some eyes from the responsible AI community on this to weigh in now on what's the measured approach, and then try to solve these symptomatic issues here,

Upol Ehsan (10:58.008)
Mm-hmm. Yeah.

Upol Ehsan (11:13.92)
Hmm.

Shea Brown (BABL AI) (11:23.45)
recognizing the fact that, of course, you're right, there are systemic issues as well that go across the political divide, across power imbalances, and across the fact that we are not regulating the internet the way that we probably should be or could be to make sure that people are protected. And that's a problem.

Upol Ehsan (11:46.144)
Your point about the local disinformation campaigns brings up a very interesting topic: post-truth, right? We are almost living in this post-truth era. And sometimes the demarcation between reality and fantasy, or synthetic fantasy, is just so thin that we can't even make the distinction. For example, the other day I was trying this new AI platform where I just gave

I think 30 seconds of my video. And then it translated me into five different languages that I don't speak, including lip movements that are very realistic. And to be honest with you, if it wasn't me, I don't know how I would even see the difference, right? And there are implications of this, for instance, right now, especially with cybersecurity.

Banks are now becoming very careful, because now you could mimic someone's voice so closely that voice signatures for phone banking become a massive issue. I know many people who have started putting holds on their accounts so that any kind of transaction over a $500 amount, or something like that, will need to be done in person. In this post-truth era, I'm curious to hear your thoughts on

what does it mean for us, and especially responsible AI? Is there anything we can do to protect normal people's lives? Banking is something we all do; who doesn't do online banking? But what can we do?

Shea Brown (BABL AI) (13:26.77)
Yeah, I mean, that's the big question. I think that we need to put our efforts on really trying to solve those problems. And one thing I think that we can do, that's sort of a meta thing, is to start steering the conversation towards productive solutions. So just like overall, exactly like you're doing, here's a specific problem. People are gonna get their voices imitated and then,

They're gonna try to get withdrawals from the bank. How do we solve that problem? And that's, we need to be engaging in that kind of conversation as opposed to sort of the sensationalist conversation around it. But I think one clear thing is we can talk about calibrated trust. Calibrated trust is, you know, in a post-truth era, the way in which we trust our interactions online.

is going to change and how do we calibrate that trust? And we have to calibrate in kind of a risk-based way. And that's part of the conversation we have to have as responsible AI practitioners is helping clients navigate that landscape, helping the public understand what those risks are as well, from the more advocacy point of view of our work. And educate the policymakers.

on what is the appropriate, or how do you think more carefully about the appropriate response, as opposed to the reactionary response. And to your point, you have to talk about the systemic problems too, because we can't rely on it happening to Taylor Swift next time for us to do something. We have to be thinking about the future.

Upol Ehsan (15:04.188)
You're... yeah, sorry.

Upol Ehsan (15:16.636)
Yes, and your point about this proactive nature reminds me of some recent work that people are trying to do, I know some folks at Hugging Face are trying to do this, which is watermarking, let's say, AI-generated images. And there has been another body of work that shows how that is a nearly impossible task to do robustly. So, you know, for viewers who are listening, you might see a lot of tools that tell you, oh,

upload the text on this website and see if it was AI generated or not. We are here to tell you most of those are wrong. So don't trust them. And they often harm people who are non-native English speakers. So teachers, if you're listening, please don't use them to see if someone has plagiarized using an AI system. They will likely get it wrong. Similarly, if there are images or even videos or even voice, right? Like OpenAI just released Sora.

which is their text-to-video tool. And the fidelity is insanely good. There are some object permanence issues, but compare the fidelity to, I don't know if you guys remember, the Will Smith eating spaghetti video, where it was all warped and it was really hilarious. I kind of miss those days when AI was just weird. I think now AI has just become so realistic that the uncanny valley doesn't even exist anymore. Like...

Shea Brown (BABL AI) (16:18.304)
Yeah.

Shea Brown (BABL AI) (16:40.279)
Yep.

Upol Ehsan (16:40.668)
We don't talk about the uncanny valley in the post-truth era, right? Because that uncanny valley seems to me to have been bridged. So, speaking of watermarking, and kind of watermarking images, what are your thoughts on that?

Shea Brown (BABL AI) (16:49.185)
Yeah.

Shea Brown (BABL AI) (17:03.698)
I mean, I think we need to try to do it, right? So no matter what, we have to try mitigation technologies. Unless someone says that they've, you know, theoretically derived that the end state of this is that it will never ever work, then I think we should still try, because there needs to be some effort to try

to certify that something is real or not, to the extent that it's possible. Now, I think there's a real possibility that the end state of that is that you can't do it, unless you impose some regulations which would require that you simply cannot post content that doesn't have a watermark. And so that's why we need people to weigh in on what kind of internet we wanna have.

Is it going to be unconstrained? In which case, you have to not necessarily believe what you see or hear at all. And you have to take it with a grain of salt and understand that the internet is the Wild West, which it is to a large degree now, and try to glean what is useful from it or glean from trusted sources. Or do you want to constrain the internet such that it's a more trustworthy place?

But that would probably inhibit some free speech, and it would also inhibit the innovation that could happen. And that's something that we can't decide on alone. We, the collective, have to kind of decide on that.

Upol Ehsan (18:46.068)
Yes. And that collective decision process itself is kind of TBD, right? Like, what does that process look like to get collective input when we think about governance? And, you know, sometimes people say participatory design would be a good choice, but participatory design at scale is very difficult, right? We know with democracies, participation at scale is very tough. So even with nation-state level powers behind them, if we can't do it, what are the chances of

doing it at a non-nation-state level? Or what would that look like at a global scale? Before we wrap up, I do want to touch on one thing you mentioned: what kind of internet do we want? And I want to talk about the topic of enshittification, right? Because before 2022, I think, we could reliably say that most of the internet was organic, in other words, mostly human-generated content.

After, I think, March of 2022, roughly speaking, that assumption is no longer true. We're flooding it not only with human-generated content, but with AI-generated content, so synthetic content. And that means everyone is producing volumes of content at speeds unimaginable. In fact, the video editing software that we use allows us

to somehow generate a blog post from the transcript. The quality is absolute crap, but it is there. We never use it, because we have standards at irResponsible AI, but that doesn't mean people won't. And I think this is the other part of the post-truth era: there is more enshittification and bad content out there.

Shea Brown (BABL AI) (20:33.254)
Yeah, for sure. Absolutely. And yeah, go ahead. Yeah, I mean, this, I think we're seeing, I'm seeing this all the time in terms of like emails I'm getting, videos that I see, like YouTube videos that are clearly just auto generated somehow with robot kind of narrations over the top. And, you know, the quality will get better. Right. And so right now it just

Upol Ehsan (20:35.005)
and

Now go ahead.

Shea Brown (BABL AI) (21:03.198)
makes all of us more wary and more dismissive of content. And so it's that sort of, just move past it because it's clearly crap. But at some point, it's going to get higher quality. But then you have the, I don't know, what was it called? Autophagy, the idea of the algorithms essentially consuming their own content and the degradation of the models that happens with

Upol Ehsan (21:25.76)
Huh.

Upol Ehsan (21:29.566)
Yes.

Shea Brown (BABL AI) (21:32.986)
it. You know, there needs to be new material that it ingests if it wants to, I mean, if the end goal is to sort of be more and more human-like. Now, if you just want it to evolve on its own into something else, that's a different story. But in any case, do we have any kind of guidance? If we want to kind of wrap this up in a way that's productive and not just doom and gloom, what guidance might we have?

Upol Ehsan (22:00.018)
I think I do. First of all, in terms of actionable takeaways, I think what we have realized is that we shouldn't wait for disasters to happen to famous people for us to act on them. Right? So that's number one. These issues are important, even if the people affected are not famous. Second of all, when we try to solve them, we have to make sure that we're attacking the problem from both sides: from the symptomatic side

as well as the systemic side. And the solutions to all of these problems are actually both, not either-or, because the problems are not just technical, they're socio-technical. And this kind of ties back to the previous episode we did about how there is a lot of hype in responsible AI and how some of the socio-technical solutions are the way forward. So with that in mind, I think we'll conclude this episode. Thank you for listening. And if you haven't,

Shea Brown (BABL AI) (22:49.535)
Yeah.

Upol Ehsan (22:56.564)
please hit that subscribe and like button. Thank you so much for joining.

Shea Brown (BABL AI) (23:00.77)
Thank you.

