
Artificial Intelligence: The New Wave of Content Creation or an Ethical Nightmare?


Author(s)

RadioEd


RadioEd is a biweekly podcast created by the DU Newsroom that taps into the University of Denver’s deep pool of bright brains to explore new takes on today’s top stories. See below for a transcript of this episode.

Generative artificial intelligence programs have piqued the interest of the online content community lately, with the popularization of AI software such as Lensa and ChatGPT. The proliferation of AI could mean trouble for artists, who say the software “learns” from their original works of art on the internet and draws on them to generate new works. The practice, while not necessarily illegal, raises a lot of questions about whether it is ethical to use AI to make art, and whether anyone can take credit for the AI’s work.

In this episode, Emma chats with Viva Moffat, a DU law professor; Kerstin Haring, a computer science professor at DU; and Débora Rocha, RadioEd’s production assistant, about the history of AI, what it is currently being used for, and whether its use is ethical. Emma also looks at students’ rapidly growing use of AI software to write essays and get help with exams at universities.

Show Notes:

Viva Moffat is a law professor at the University of Denver. She is also the co-director of the Intellectual Property and Technology Law Program. Moffat began her academic career at the University of Denver’s Sturm College of Law as an assistant professor after seven years in private practice and one year as a visiting professor. While in law school at the University of Virginia, Moffat was editor-in-chief of the Virginia Law Review. After graduation, she clerked for Judge Robert R. Beezer of the U.S. Court of Appeals for the 9th Circuit in Seattle. Following the clerkship, Moffat was an associate at Keker & Van Nest LLP in San Francisco. After moving to Colorado in 2000, Moffat’s practice focused on intellectual property litigation and transactions, sparking her interest in teaching and writing in the field of intellectual property and contracts.

Kerstin Haring is an assistant professor of computer science at the Ritchie School of Engineering and Computer Science at the University of Denver. She is the director of the Human Robot Technology Lab and is affiliated with the Institute of Electrical and Electronics Engineers. Her research in human-machine teaming directly benefits the military and is in line with the Third Offset Strategy of the Department of Defense.

Débora Rocha is a production assistant at RadioEd. She is currently in her last year at the University of Denver, majoring in Film Studies and Production with minors in Theatre and Marketing. Party Quest, the short film she directed as part of her major’s capstone, has garnered international attention and awards.

More Information:

Lawsuit Takes Aim at the Way A.I. Is Built: https://www.nytimes.com/2022/11/23/technology/copilot-microsoft-ai-lawsuit.html

Lensa, the AI portrait app, has soared in popularity. But many artists question the ethics of AI art: https://www.nbcnews.com/tech/internet/lensa-ai-artist-controversy-ethics-privacy-rcna60242

An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy: https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html

AI art app makes waves on social media and conjures a tsunami of controversy: https://www.morningbrew.com/daily/stories/2022/12/09/ai-art-app-makes-waves-on-social-media-and-conjures-a-tsunami-of-controversy

Explained: The controversy surrounding the AI-generated artwork that won US competition: https://www.firstpost.com/explainers/explained-the-controversy-surrounding-the-ai-generated-artwork-that-won-us-competition-11188431.html


Transcript

Matt Meyer:

You're listening to RadioEd...

Emma Atkinson:

The University of Denver podcast...

Matt Meyer:

We're your hosts, Matt Meyer...

Emma Atkinson:

and Emma Atkinson...

Emma Atkinson (field audio):

Okay, so I’m at OpenAI in their beta… [typing noises] “Write a podcast intro about the use of artificial intelligence to create art and other content.”

Here’s what it says.

“Welcome to the AI Creatives Podcast, where we explore the fascinating world of artificial intelligence and its potential to create art and other content. From AI-generated music to AI-generated paintings, we’ll be discussing the implications of this technology and how it’s changing the way we create and consume art. We’ll also be talking to experts in the field to get their insights on the current state of AI-generated art and what the future holds. So, join us as we explore the creative possibilities.”

Emma Atkinson (VO):

Okay, that's not actually quite what this episode is going to be about. But we will be exploring this new world of AI content creation – how it works, where it's being used and if it's even legal.

Oh, and thanks to OpenAI for helping me write that intro.

It seems like everyone’s talking about AI right now—particularly generative AI, which “learns” by identifying patterns in existing data and then generates new content based on those patterns. It’s a little more complicated than that, but we’ll circle back to some computer science fundamentals later.

You probably saw at least some of your friends posting AI-generated portraits of themselves to social media in the last few months, with their appearances digitally altered in some pleasing or grotesque way. That was likely through Lensa, an app where users can pay to have an AI model create portraits of themselves with different themes.

Lensa, ChatGPT and DALL-E are all examples of generative AI systems. DALL-E and Lensa both produce images, while ChatGPT generates text. ChatGPT is owned by OpenAI, the company whose tool I used to help me write—I say write very loosely here—the introduction to this podcast. And there are other AI models that can write code.
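
(For the curious: reproducing that intro takes only a few lines of code rather than a trip to the browser. The sketch below is illustrative, not official; it assumes the openai Python package as it existed around the time of this episode, with the pre-1.0 Completion API and the then-available text-davinci-003 model, plus an API key in your environment.)

# A sketch of generating text the way this episode's intro was generated,
# but via the OpenAI API instead of the web playground. Assumes the pre-1.0
# "openai" package (e.g., pip install openai==0.28) and a valid API key.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # a completion model available at the time
    prompt=("Write a podcast intro about the use of artificial intelligence "
            "to create art and other content."),
    max_tokens=200,
)

print(response.choices[0].text.strip())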

But because all of these AI models make use of existing content (art, essays, papers, etc.) to generate “new” content, it raises the question: Who owns AI-generated content?

Before we get into that, I wanted to take a look at the history of AI, so I asked University of Denver computer science professor Kerstin Haring to help me out.

Kerstin Haring (2:28):

Starting from the beginning, right, so when we're talking about AI, or artificial intelligence, it has been around for a lot longer than people think. So the first idea that came up is like, well, something that is basically not human or not animal or not pet intelligent. In that sense, it is artificial that machines are doing that. And then fast forward a little bit through science fiction's ideas of artificial humans, these kinds of robots, and what they might look like or do. It was around the 1950s that Alan Turing came up with the idea that machines might be able to do something that is very similar to what humans can do.

Emma Atkinson (VO):

Alan Turing didn’t get very far, Haring says, because of the limited ability of early machines to store programs. And until we were able to develop the technology that led to more robust computational power, AI was kind of at a standstill.

Until it wasn’t. In the 1990s and 2000s, scientists achieved many AI milestones, including the defeat of chess grandmaster Garry Kasparov by IBM’s Deep Blue in 1997.

Now, AI technology is used in seemingly every industry. Self-driving cars are AI. Amazon’s Alexa is AI. Customer service chatbots, facial recognition, autocorrect. AI, AI, AI.

Kerstin Haring (3:55):

It's in the news because it finally works in a way that we take interest and maybe pleasure, or enjoyment, or wonder from it, that it actually can do that.

Emma Atkinson (VO):

Perhaps one of the reasons we’ve become so entranced with generative AI in particular is that it seems to be making something out of nothing—creating new content seemingly out of thin air. Well, that’s not actually how it works.

Kerstin Haring (4:23):

We enter some text prompts or an image, it thinks a little bit, it does something, it seems like it's creating something new out of nothing, right? And that is very much how it works for us. That's why we're so fascinated by it, by what is happening right behind this curtain. That is why it's so hard to explain these things, because it gets a little bit more complicated. So very basically, what all of these things have in common is that they're based on something we call machine learning. So very basically, sophisticated statistical models that we apply to huge data sets. And from these data sets, machine learning can learn things like patterns. It can combine new things, it can look at, well, what is this combination of pixels, or this combination of image and words, and all of these things. So it can basically extract information, and it can do this with so many pictures. And it basically creates these groups and categories. So it's just out there, looking at all of these things at the same time, and if there's a pattern in the data, it very basically picks up on that. So that is the big thing that is happening behind the curtain. And then the different technologies that we see around, they use slightly different kinds of these machine learning approaches.
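
(To make Haring's description concrete, here is a toy sketch, in Python, of "extract patterns from a data set, then generate new content from those patterns." It is a character-level Markov chain, vastly simpler than the models behind Lensa or ChatGPT, and every name in it is illustrative, but the basic move is the one she describes.)

# Toy "generative model": learn which character tends to follow each short
# context in a training corpus, then sample new text from those patterns.
import random
from collections import defaultdict

def train(corpus, order=2):
    """Record, for every `order`-character context, the characters seen next."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model

def generate(model, seed, order=2, length=80):
    """Grow text one character at a time by sampling the learned patterns."""
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # a context never seen in training: stop early
            break
        out += random.choice(choices)
    return out

corpus = "nothing is created, everything is copied and changed"
model = train(corpus)
print(generate(model, "no"))  # new-ish text stitched from learned patterns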

Emma Atkinson (VO):

Those data sets from which the AI models pull are made up of billions of pieces of content, all of which have been neatly collected by web crawlers.

Emma Atkinson (interview, 6:00):

So let me give you a hypothetical. I'm an artist and I have a website. And on this website, I post digital images of my art, right? Is there a chance that because my images are out there on the internet that they will get picked up by one of these web crawlers and then used in AI art?

Kerstin Haring (6:18):

Yes, there's a good chance, right? There are some legal restrictions on what one should web crawl and not. But whether they are, in the end, obeyed is very hard to say, right? Because we're talking, again, about a data set of 5 billion images. And then either looking at them or searching in there is really hard.
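
(A concrete aside on those restrictions: the main rule a polite crawler honors is a site's robots.txt file, which is a convention rather than a law, and that is partly why it is so hard to know whether any given data set obeyed it. Below is a minimal sketch using only Python's standard library; the user agent name and the example URL are illustrative.)

# Minimal sketch of one "polite crawler" step: consult a site's robots.txt
# before fetching a page. Nothing in the protocol forces a crawler to obey.
import urllib.robotparser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

def polite_fetch(url, user_agent="example-crawler"):
    """Return the page body if robots.txt allows this user agent, else None."""
    parts = urlparse(url)
    rules = urllib.robotparser.RobotFileParser(
        urljoin(f"{parts.scheme}://{parts.netloc}", "/robots.txt"))
    rules.read()  # download and parse the site's crawling rules
    if not rules.can_fetch(user_agent, url):
        return None  # the site asked crawlers like this one to stay away
    with urlopen(url) as response:
        return response.read()

page = polite_fetch("https://www.example.com/")
print("fetched" if page else "disallowed by robots.txt")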

Emma Atkinson (VO):

Is it okay for people to use AI models to create art? Is it ethical? Is it legal? What if the dataset the AI pulls from includes original art made by other people – which is likely? Who then owns the AI-generated art? Can artists and creators stop their work from being part of the billions of pieces of content that AI models pull from?

It can be a little strange to see a computer doing what used to be done only by humans. Our production assistant, Debora Rocha, experienced it firsthand. She’s a DU student and a successful screenwriter and director.

Emma Atkinson (interview, 7:11):

So, you were telling me about how your friends took your script and put the idea in ChatGPT or one of these AIs. And it came out with something very similar to your own work, right?

Debora Rocha (7:26):

Yeah. So what happened was, they were looking at their own script. And they tried to- they put kind of like the logline for their story in the ChatGPT chat for the AI to make some kind of redo of the script. So I think they wrote, “write a script about this.” And they wrote like the logline for their film. And they were like, okay, this is cool. And then they decided to do kind of like the same thing for my film. So I don't think they knew my logline. So they kind of wrote like a few, a couple of sentences, I think, just about, like, the idea that I had. And they were like, “write a script about this.” And the AI came up with a very short script, I feel like maybe one or two pages. It wasn't as long as the script that I was working on for the capstone last year. But it was, like, incredibly similar to what I wrote.

Emma Atkinson (interview, 8:26):

That must have felt so weird. If you could sum it up, the way that you felt seeing that in one word, what would it be?

Debora Rocha (8:33):

That is a very good question. I think outsmarted.

Emma Atkinson:

Yeah, that's a good one.

Debora Rocha:

I read it and I was like, “maybe I am AI.” You know what I mean? Like, maybe we think the same. Like this computer has a very similar brain process to how my brain processes it, because my story had a narrator, they had a narrator. We are thinking the same way for the story, very similar way to start the story, similar ideas of how a story like that would exist. So does AI know of my story? You know, is there a way that my script is online and I don't know about it? Or maybe my film is online, and maybe AI already knows of what I've done? And maybe that's where it came from? Or is it like that I think the same?

Emma Atkinson (interview, 9:25):

Or maybe it's that you both, you and the AI, are pulling from existing scripts, right? So like all movies that you've watched, and all of the experiences that you've had with film, the AI is pulling from similar, I don't want to say experiences, but similar content that you've also seen, right? Which is weird, because it's like, AI is not a person, but they're pulling, you know, from the same “memories” that you also have of film and content.

Debora Rocha (9:56):

Yeah, it's like what my mom tends to say to me about ideas, and stories and creativity. She says that nothing is created, everything is copied and changed. This idea of like, all the ideas already exist. And the new ideas come from other ideas that just become new because you based it off of what you already know, and just come up with something that works from that.

Emma Atkinson (VO):

AI content creation… ethically dubious, right? At least as it exists currently. But is it even legal?

University of Denver law professor Viva Moffat says that’s a complicated question with an even more complicated answer.

Viva Moffat (10:34):

We don’t actually know exactly what it’s doing. What we do know is that the AIs are essentially scraping data off of the internet, at least publicly available data from the internet. And so there's no, as I said, one answer to “is it legal or not legal,” because of the range of possible stuff that’s out there: things that are protected by copyright, things that are in the public domain, so not protected by copyright or any other kind of law. But there could be- I don't know the extent to which it can get at sort of non-publicly accessible sites, I just don't know the answer to that.

So, you know, it's certainly possible that it's grabbing from things that somebody thinks are at least confidential or private or proprietary. I don't know the answer to that question. And also, well, we don't exactly know everything that it's picking up. And so that's one thing. We don't know whether- Is that okay? So your question is, is that okay, for it to do that? Well, you know, it's sort of, to some extent- that's a real law professor answer, but it depends.

And so, for example, there are lots of times when it is okay to either, you know, under copyright law, or sort of other doctrines, to take existing works and use them to create something new. And you might have to think about both whether the copying or using of the work itself is copyright infringement or otherwise wrongful. And then whether the thing that you create is infringing or not is also a question that in copyright law is often resolved under the fair use doctrine, which is a very fact sensitive kind of determination.

Emma Atkinson (VO):

So the law is a little fuzzy when it comes to this stuff, right? But do artists have any recourse, any legal right to say, “Stop using my art?”

Viva Moffat (14:42):

Well, in a lot of ways, no, that's not a thing that artists can do. Artists might feel that, they might say, I don't want my work being used in this way. There are ways in which copyright might vindicate that concern. So there are ways in which they could say, “oh, yeah, if you were using it in a way that is infringing, and is not fair use, then maybe I could bring a copyright”- you know, that might be a successful copyright lawsuit. But- I guess the way to say this is- they can't if something would otherwise be fair use; an artist can't prevent that. Sort of the point of fair use is that it allows users like me, if I want to go create a new mash-up work of something, you know, that if what I'm doing is fair use, I don't have to ask for permission.

So, you know, the law doesn't want to stop all kinds of copying and reuse, just certain forms of it. So I don't- that doesn't mean that some of these creators might not ultimately have some claim or cause of action that they could bring against the AIs. But having a copyright doesn't mean you get to control every use of your work. What I meant was sort of like, I think that some of the concerns that artists have are perfectly understandable. And partly, I think, it’s not like we can know, “oh, this is what's going to happen or what will be okay, and what won't be okay,” because there's so much sort of unknown about how this is going to proceed.

Emma Atkinson (VO):

There is one example, though, of someone trying to take on the legality of generative AI. A Los Angeles programmer named Matthew Butterick is suing Microsoft, claiming that the company is essentially committing piracy with its AI model Copilot, which generates code using open-source software data. The New York Times reports that the lawsuit is believed to be the first brought in the name of protecting original content and data from generative AI models.

Butterick said to the Times, “The ambitions of Microsoft and OpenAI go way beyond GitHub and Copilot. They want to train on any data anywhere, for free, without consent, forever.”

Viva Moffat (11:06):

So this lawsuit says, “no, no, you can't do that. That violates the terms of our licenses. You can't take this code, use it and then charge for the product; you can use it and give it away, but you can't use it and charge for it.” So I think that's an example of one way in which some group of creators who have something here- code, not visual art- might try to challenge at least aspects of what's happening.

Emma Atkinson (interview, 15:47):

I wonder if you could, you know, what advice would you give to someone looking to create something using AI? Is there any, is there anything, any warning that you would ask them to heed in terms of the legality of using some of these technologies?

Viva Moffat (15:40):

I mean, it's, you know, I don't want to be a person who's saying, “oh, we have to just stop.” And, you know, I'm not a technophobe. And in fact, the history of copyright law is a history of incumbent, established industries freaking out about new technology. I mean, that's why we have copyright law in the first place. The printing press came around, and the monks and the other people who sort of, you know, had a monopoly on knowledge and knowledge dissemination freaked out.

And you know, that's just happened over and over again: the player piano, radio, TV, copy machines, the internet… BitTorrent. I mean, AI may be a quantitative and a qualitative change, but it may just be a quantitative change. It may just be, “oh, we've dealt with this before, and we're gonna deal with it in the future.” And this is a bad thing for a law professor to say, but the law is not very good at getting ahead of this. The law is behind technological change and business model change, and is more reactive than proactive.

Emma Atkinson (VO):

Beyond art and computer science, generative AI is also already making waves in the field of education, with teachers and professors being urged to be on the lookout for students using ChatGPT or other models to complete their schoolwork.

Emma Atkinson (interview, 17:04):

In your personal experience, have you encountered any students using AI in your courses?

Viva Moffat (17:19):

I don't know. I mean, I don't think so. But I mean, I think I say that because like- I'm not gonna say it's impossible. But I give open book, open notes exams, I tell students that they can use, you know, any materials that they want. I tell them that they cannot give or receive assistance. And so I did clarify with them, I think that included AI, even without me saying it, but this year, this semester, I did say, you know, you can’t give or receive assistance, and that includes the assistance of any non-human AI or other technology.

Emma Atkinson (interview, 17:48):

So you felt that it was necessary to kind of add that as a little afterthought.

Viva Moffat (17:56):

In the interest of clarity, I think we have to assume that our students will know about and use and be better at using AI than I will be. I mean, my personal view has always been like, I don't want to put students in a position where they feel like, “oh, it would really help me to get assistance in some way.”

Emma Atkinson (VO):

Obviously, ethically, using AI to create schoolwork and passing it off as your own is a no-no. But people aren’t always going to follow the rules—especially when those rules don’t necessarily have any clear legal consequences.

Moffat has some words of advice for educators.

Viva Moffat (18:35):

I think the worst response we could have is to just pretend it's not happening or hope that it won't keep developing. So I mean, that's why I talked a little bit about, like, well, maybe I'll talk to my students about how they might want to use AI, and whether there are ways in which it could be incorporated into teaching, rather than us pretending that it's not out there.

And I think more broadly, that's what you know, sort of policymakers need to be thinking about too, is how to harness it for some good and try to think about what might happen and where we might need to step in with regulation or some response. So, but I think it is a little bit unpredictable.

Emma Atkinson (VO):

Thanks again to our guests, University of Denver professors Kerstin Haring and Viva Moffat, and to our production assistant, Debora Rocha. For more information on their work and the sources used in this episode, check out our show notes at du.edu/radioed. Tamara Chapman is our managing editor. James Swearingen arranged our theme. I’m Emma Atkinson, and this is RadioEd.
