KentWired

The independent news website of The Kent Stater & TV2

AI-generated songs fuel ethics debate

Artificial Intelligence generated the image for this story.

As access to generative artificial intelligence expands, AI-generated songs top the trending page on TikTok.

The genre has billions of views on the platform, and clips of famous characters, personalities and musicians covering songs flood the app’s “For You” page. While none of these performances actually happened, real people with access to widely available software are producing them.

Adam Fneiche, a sophomore visual communication design major, has created many of these viral songs and uploaded them to his TikTok account. His most viewed video with over 1 million views features Nickelodeon character Squidward covering the My Chemical Romance song “I’m Not Okay (I Promise).”

“At first it was the novelty, and then it kind of turned into this unhealthy obsession,” Fneiche said.

Fneiche’s interest in AI started in January, when he saw articles about AI covers of Ariana Grande songs. From there, he asked himself how easy it would be to create one himself and joined an online community where he could learn the process.

“It’s a lot of teenagers,” Fneiche said. “We’re all people who want to push this technology to its limit.”

Fneiche uses a technique called retrieval-based voice conversion (RVC), which uses a deep neural network to transform one speaker’s voice into another’s. When making covers, he inputs a high-quality recording of the voice he wants to replicate and an a cappella version of the song he wants that voice to cover. Essentially, only two audio files are needed to make a voice say or sing anything.
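
For readers curious about that two-file workflow, here is a minimal Python sketch of the idea. The file names, the VoiceModel class and its convert method are hypothetical placeholders standing in for a trained RVC network, not any particular software Fneiche uses; a real system would replace the pass-through stub with the neural network that does the conversion.

```python
# Sketch of an RVC-style cover workflow: one reference recording of the
# target voice plus one a cappella track to convert. The VoiceModel class
# is a hypothetical stand-in for a trained retrieval-based voice
# conversion model.

import numpy as np
import librosa           # loads and resamples audio files
import soundfile as sf   # writes the converted audio back to disk


class VoiceModel:
    """Hypothetical placeholder for a trained RVC model.

    A real model would learn the target speaker's characteristics from the
    reference audio and re-synthesize the input singing in that voice. This
    stub passes the audio through unchanged so the script runs end to end.
    """

    def __init__(self, reference_audio: np.ndarray, sample_rate: int):
        self.reference_audio = reference_audio
        self.sample_rate = sample_rate

    def convert(self, source_audio: np.ndarray) -> np.ndarray:
        # Placeholder: a real implementation runs the neural network here.
        return source_audio


def make_cover(voice_path: str, acapella_path: str, output_path: str,
               sample_rate: int = 44100) -> None:
    # Load the high-quality recording of the voice to replicate.
    voice, _ = librosa.load(voice_path, sr=sample_rate)

    # Load the a cappella performance that voice should "cover".
    acapella, _ = librosa.load(acapella_path, sr=sample_rate)

    # Build (in practice, load) a voice model from the reference recording,
    # then convert the a cappella into that voice.
    model = VoiceModel(voice, sample_rate)
    converted = model.convert(acapella)

    # Write the result; mixing it over an instrumental is a separate step.
    sf.write(output_path, converted, sample_rate)


if __name__ == "__main__":
    make_cover("target_voice.wav", "acapella.wav", "ai_cover.wav")
```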

RVC programs are widely available on websites such as GitHub, where Fneiche downloaded the software. GitHub and several other open-source platforms recently released a policy paper suggesting changes to the proposed European Union AI Act. The act would be the first comprehensive legal framework for AI and proposes regulating AI models based on their perceived risk levels.

In the policy paper, the companies argued that open-source software encourages innovation and that regulating small developers of high-risk models could restrain progress.

Maura Grossman, a research professor with the David R. Cheriton School of Computer Science at the University of Waterloo who studies AI ethics, cautioned against the public rollout of generative AI.

“What is different about generative AI than some of the AI tools is the tools that were made accessible to the public were relatively easy to use, so anybody can use them,” Grossman said. “A lot of these tools have been shipped and put out without maybe sufficient validation or testing to make sure that they didn’t do dangerous things.”

Grossman pioneered the use of technology-assisted review, which uses supervised machine learning to infer the relevance of electronically stored information in legal discovery. She separated AI ethicists into three camps: At one end are the techno-solutionists, those who think AI is the answer to most problems; in the middle are those who believe AI has positive and useful aspects but also concrete and present risks; and at the other end of the spectrum are the existential risk folks who believe AI can grow so powerful it threatens humanity. She places herself in the middle.

“Tools aren’t ethical or unethical,” she said. “These are tools, and the tools can be used in helpful ways, and the tools can be used in harmful ways.”

Grossman questioned whether AI-generated music infringes the intellectual property rights of artists, as well as the ethics of using someone’s voice without their permission. As more people gain access to these models, she also raised the possibility of scammers scraping voices to use in extortion calls.

Fneiche called his projects silly and said he often gets requests from fans who want to hear their favorite artist sing other songs they enjoy. If an artist requests their voice not be replicated, he deletes any models he has.

Despite his popularity, which has earned him partnerships with AI software companies, he does not want to be thought of as an advocate for AI.

“AI should be a tool,” he said. “It should never be the final product.”

Alton Northup is a staff reporter. Contact him at [email protected].

About the Contributor
Alton Northup, Campus Editor
Alton Northup is a junior majoring in journalism. This is his first semester as a campus editor, and he is excited to welcome new reporters to KentWired. He previously worked as a staff reporter. This past summer, he interned for The Chautauquan Daily in western New York. Contact him at [email protected]
