AI Revolution

The future of the Internet is here and many fear it. Will Generative AI take the bread out of content creators’ mouths?



The web browsing experience is changing forever, thanks to Artificial Intelligence. Search engines will serve us ready-processed information, even though the AI models behind them are trained on content used without its creators’ consent. To survive, the media system and content creators need to rethink their approach.

  1. The integration of AI into search engines has turned the technology into an “information filter” that delivers information to us already processed. This development has led to numerous disputes between AI companies and content creators: large media entities, as well as authors and celebrities, have sued OpenAI and other companies for using their content without consent to train AI models.
  2. The popularization of Generative AI brings risks of disinformation and manipulation. If users read an AI-generated compilation of information instead of visiting websites, they no longer consult the primary sources of that information. On top of that, AI technology can be used to generate deepfakes, and model errors (called hallucinations) produce false information that the public isn’t always aware of.
  3. Companies like Amazon still have work to do on the rules and policies for handling AI-generated content. Some book authors have found that malicious actors tried to commit fraud in their names, selling AI-generated books the authors had never written. Because the sites draw no clear distinction, users may unknowingly end up buying fraudulent AI-generated content.
  4. The solutions for balancing the relationship between AI companies and the media or content creators lie in regulation and in setting fees for tech companies that use media sites as sources to train AI systems. In the EU, the AI Act came into force on August 1, bringing a series of clear rules and obligations for how Artificial Intelligence can be used.
  5. The future of the media system lies in financial sustainability and in avoiding over-reliance on tech platforms; otherwise, we risk a collapse of the information ecosystem, fueled by software glitches and big tech’s race for profit.

Content creators face off with AI technology

Google has introduced Generative AI into its search engine through Gemini, its Artificial Intelligence model. For users, this means they no longer need to click through to the sites in the search results, as the AI system delivers summarized information directly on the Google search page.

At the end of July, OpenAI, the company behind ChatGPT, also announced that it was launching SearchGPT, a prototype of its own AI-powered search engine. The company has entered into a series of partnerships with media entities, such as the German media group Axel Springer (publisher of Bild), as well as with content creators, on the basis of which it is training its AI system to provide more accurate information.

Microsoft, meanwhile, has become one of the major investors in OpenAI, having integrated ChatGPT’s algorithms into its own products, including the Bing search engine.

All these developments affect the entire content creation industry, from media websites to other entities that rely on traffic for monetization. The consequences, primarily financial, are major.

News agencies, photo agencies and associations that defend writers’ rights have reacted strongly and have signed an open letter calling for stricter regulations on AI and how it uses content to develop its algorithms.

The New York Times has also sued OpenAI over the company’s AI models being trained on the media giant’s content without permission. Millions of the publication’s articles, including in-depth investigations, op-eds and news stories that the company invested resources to create, were used for training AI systems, the publication claims. In an article on its own website, OpenAI denies the allegations.

An investigation by the publications Proof News and Wired shows that companies such as Apple, Salesforce, Nvidia and Anthropic have also used online content without the owners’ consent: the journalists found that algorithms had been trained on content from some 48,000 YouTube channels.

Another high-profile scandal involved Scarlett Johansson, who threatened legal action against OpenAI after the company allegedly used an imitation of her voice in a ChatGPT update without her consent.

[Infographic: the cost of training OpenAI’s GPT-4 and Google’s Gemini Ultra, in millions of dollars]

AI can redefine information and economic systems

Peter Erdelyi, director of the Center for Sustainable Media (Hungary) and a specialist in the field, thinks AI’s role in the media space is a double-edged sword.

“I think it’s a threat from the perspective of AI being really good at creating content, and it’s going to be better at creating content”, Erdelyi explains, in an interview via Google Meet.

The danger, he says, arises when users receive ready-summarized content from AI, because they don’t always get to the primary sources of information, such as news articles.

“I think some of the content that is now created by news publishers is going to be displaced by content that is generated by AI. So, if you search for, okay, what is the best investment opportunity in Romania right now, maybe two years ago you ended up with an article by Ziarul Financiar or someone else, that is created by a human, but now you’ll have Gemini or OpenAI giving you a compilation of information”, Erdelyi says.

Courtney Radsch, director of the Center for Journalism and Liberty at the Open Markets Institute (US), also predicts that AI will have a huge impact on the media industry.

“I think that we are at a really fundamental turning point with respect to the future viability of journalism and the news industry, and the development of Artificial Intelligence, Artificial General Intelligence and Generative AI, which is being driven by a handful of big tech companies, largely based in Silicon Valley, that are imposing a new economic system that intensifies datafication (note: transforming behaviors, activities or processes into quantifiable data) and is built on the labor and expressive content of others”, Radsch says in an interview via Google Meet.

“This entire AI revolution is being developed with content that was stolen from rights holders, whether we’re talking about journalism, music, art, code. All of this content that was turned into data and is being used to fuel AI is not only already undermining the business models of a range of industries, most acutely the news industry, but it’s also creating elements of an entirely new economic system. It’s intensifying surveillance capitalism. It is built on the datafication of everything, literally turning everything, our thoughts, our emotions, our sentiments, our biological functions, into data”, Courtney Radsch explains.

Radsch also adds that the intermediation of information content (e.g. news) by big tech platforms, be it AI or social networks, which have the power to censor, block or manipulate information, is dangerous.

“I think the fact that these platforms remain in control of the major information and communications infrastructure of the contemporary era, on which media depend to publish, reach audiences and distribute information – and now they are also leading the charge in AI – is really problematic. It seems like we have learned nothing from the social media era”, she concludes.

[Infographic: private investment in Generative AI grew 8x in 2023 compared to 2022; total 2023 investment in billions of dollars]

Disinformation, manipulation and deepfakes

Angie Holan, director of the International Fact-Checking Network, an initiative of the Poynter Institute (USA), which promotes fact-checking by creating standards in the industry, believes that we don’t yet fully understand how AI technology will change the media landscape. The problem arises because this technology can be used for good as well as bad purposes.

“We are seeing people who are doing political campaigns that use misinformation and use AI. So, we’re seeing things like deepfake audio. There have been multiple cases that have been covered in the mainstream media about people making fake audio messages that sound like, you know, Joe Biden. So, they’re false messages with different purposes. Some of them tend to be like false endorsements, some of them have been like false information on voting days and times. So, just basically in any context where you might find misinformation, you can also find AI-generated misinformation”, says Holan.

A recent Recorder article shows that the AI-fueled deepfake phenomenon is also rife in Romania. Numerous fake ads created with AI are circulating on social networks, imitating the voices of celebrities, politicians or business people and using video snippets from various shows or podcasts to promote get-rich-quick schemes and investment scams. Essentially, the journalists show, AI has enabled complex criminal networks that lure thousands of gullible users and empty their bank accounts.

Angie Holan notes that there are also AI bots that post information at a very fast pace, which creates a problem for news organizations: they can’t keep up with automated systems.

Even when there are no dishonest intentions at play, AI can still misinform the public. AI systems sometimes “hallucinate”: certain model errors produce information that is not real (most AI developers say they try to minimize these errors).

“From my point of view, one of the big problems with AI is that it does not have good safeguards for accuracy built into it. A Large Language Model (note: LLM – the language-prediction model on which AI systems are built) works by language prediction, and when the AI models run into some sort of knowledge gap, they just fill in words. And so, we see, just recently, Google tried to roll out an AI assistant and it told people to eat pizza with glue on it”, Holan adds.

Lack of clear policies against fake AI content harms writers

Jane Friedman, who has 25 years of experience in the publishing industry, is a writer and the editor of The Hot Sheet, a newsletter about the industry. In the summer of 2023, she was contacted by a reader who had discovered a number of books attributed to her on sale on Amazon. Friedman didn’t know what the reader was referring to, so she checked the platform and found titles that bore her name but that she hadn’t written.

“I informed Amazon that someone was using my name on books I didn’t write. But since using AI to generate a book isn’t against the law, and there are many people with my name, Amazon didn’t immediately take any action”, Jane Friedman says in an interview for Panorama via Google Meet.

“It was only after some persistence that they finally took the books down, when it was clear that someone was attempting to confuse consumers and perpetuate fraud”, she adds.

The fake books were listed not only on Amazon, but also on her author page on Goodreads, an Amazon-owned site that is a kind of social network for books and reading enthusiasts.

“So, it didn’t match the sort of writing I typically do, it didn’t match my voice or style. I don’t think that anyone took any care to make it look or sound like me. It was just creating content that could feasibly be associated with me, and then they put my name on it”, she also explains in the interview for Panorama.

Jane Friedman says she doesn’t know who used her name for the AI-generated books, because Amazon didn’t give her that information and she would have had to file a lawsuit to get it, which would have been costly. She limited herself to complaining through Amazon’s official channels, and in the process found that the platform has no procedure for such situations.

“They don’t have takedown requests that relate to this sort of fraud. They have copyright infringement and trademark infringement claims that you can make, but what happened to me is neither a copyright violation, nor a trademark violation. So, even though they did ultimately take it down, it was in spite of that, it wasn’t because of it”, Friedman says.

She also explains that the platform most likely took the books down as a result of the pressure and the fact that the issue had made it into the media, which was hurting Amazon’s image, but says she received no further explanation from the company.

The future of cultural industries is uncertain in the AI era

As technology develops, AI systems will become more and more polished, capable of producing content that almost perfectly mimics human-created work. That worries The Authors Guild, an American organization with a century-long tradition of defending the rights of writers and authors.

Umair Kazi, director of advocacy at The Authors Guild, says one of the biggest concerns about Artificial Intelligence and its impact on content authors is the dilution of the content market.

“Generative AI can produce books and long-form works. They’re not always perfect, but they’re going to get better as Generative AI gets better. But what we’re concerned about is that those AI-generated works, which can be produced very quickly, will enter the market and displace works written by our members, human authors, in the sense that they’re quicker and cheaper to produce. A writer might take like a year to write a well-researched nonfiction book. But, with Generative AI, you could write a book”, Kazi says for Panorama, in a conversation via Google Meet.

He says the organization works with entities like Amazon, so he has seen malicious users deploy AI-generated content on the platform. The main problem, Kazi points out, is that regular users don’t always realize they’ve been tricked with this content. And even if the quality of AI-generated work is not yet comparable to that of work by human writers, this could change in the future.

Another problem is that writers’ databases and content are being used to train AI models, especially since this kind of content makes high-quality training material for LLM systems, Kazi explains. However, just as with news articles, writers often did not give their consent for their works to be used to train AI models.

“The books were just taken. They were not paid anything. And the result is you have a technology that can then basically kill the entire profession of writing, the craft of writing”, Kazi also says.

AI systems can be useful to authors if used correctly, and Kazi notes that he knows of publishers already using them to automatically write marketing copy.

“We’ve always said that Generative AI can be a very useful tool, but there have to be legal and policy safeguards in place to make sure that it is not used in a way that harms the human creative industry. So, what that means, the things that we’ve been pushing for (note: the organization), is that if something is AI-generated, it has to be disclosed that it is AI-generated, so the consumer can make a choice between, like, buying a human-written work or an AI-generated work”, he adds.

Regulation and honest collaboration are the path to balance

Similar to what happens in the literary industry, the media industry also needs protection against abuses committed with the help of Artificial Intelligence. That’s why Courtney Radsch has created a framework for protecting and compensating journalists and media organizations for the use of their content by AI systems.

“The framework basically looks at the inputs into AI systems, the processing, and the outputs. And so, the framework looks at where is value created in journalism with respect to different parts of the AI system”, she says.

“It looks at archives, at data that is created in real time, and how those can be useful for different parts of AI systems, for example, developing and training Large Language Models or other foundation models relies on access in part to quality information”, Radsch continues.

As technology evolves, the expert says, quality content will become more and more valuable, and media organizations should assess the value of the content archive they have, as well as the content they currently produce.

Another part of the framework developed by Radsch concerns the “hallucinations” that AI systems can produce. To avoid this risk, AI systems need access to real-time or up-to-date information that allows the fine-tuning of LLMs. This access to high-quality content is a benefit that media organizations can offer technology companies.

“But then you also want to look at how do we establish the value. Is this about a one-time licensing fee? Is this about royalties? How can they think about the value, for example, of using these tools in their newsrooms?”, Radsch asks, in the conversation with Panorama.

What media organizations should do, she believes, is assess not only what benefits AI systems bring to newsrooms, but also what the newsrooms are offering in training these systems and how that value can be monetized, through the creation of licenses or usage rights.

Two other essential elements included by Radsch in the framework are regulatory and environmental protection. Clear policies and rules on intellectual property and copyright, as well as on intermediary liability, which does not currently apply to AI companies, would help avoid monopolies, abuse and environmental harm.

For now, the European Union is at the forefront of regulating Artificial Intelligence. On August 1, 2024, the European Artificial Intelligence Act (AI Act) came into force, which aims to “encourage the responsible development and deployment of Artificial Intelligence in the European Union”, says the European Commission’s press release.

Essentially, the AI Act gives AI developers and deployers clear rules and obligations on how AI technology can be used. The set of rules also provides a way to assess AI systems based on the risk they pose to the public (from the lowest, such as spam filters, to the highest, such as AI systems whose abusive use by various entities threatens human rights).

How to make peace with AI

As for the future of the relationship between booming AI and content creators, Peter Erdelyi believes there is a broader policy problem. Over-regulation, he says, will get in the way of innovative solutions, while the friction between the big AI companies and the media industry is escalating far too fast.

Although no one knows exactly how the situation will evolve, Erdelyi thinks it’s important to create policies so that content creators and publishers can be rewarded and collaboration can be more constructive.

Courtney Radsch says AI will continue to be integrated into newsrooms through partnerships between tech companies and publishers. “I worry about these partnerships and these deals, because I think it’s once again creating a dependency on a handful of big tech platforms”, she explains.

One solution could be to work with smaller companies, start-ups or companies that honestly pay for licenses.

“I think they (note: the newsrooms) need to really think about the choices they make about the products that they integrate and how they reflect the values of public interest, and do not further their dependency on a handful of platforms, the way that their choices during the social media age did”, Radsch adds.

The future of content creators and media entities, Angie Holan believes, lies in their ability to secure sustainable financial models, especially as the quality of online content generally declines.

“Credible publishers, if they can’t preserve and build their revenue, they’re going to go out of business, and you’re going to see more junk publishers. And then you can envision what I would call an information ecosystem collapse. Like, if Google doesn’t have good search results to show people, what are they going to do? I think this is something that the tech companies have to think about really carefully: how their business practices are impacting information quality and publishing”, says Holan.

Artificial Intelligence has set in motion social and economic changes that will redefine business models, human behavior and the labor market in the years to come. Beyond regulation, we also face a process of adapting to new ways of doing things. Even if now, in the midst of this technological revolution, it is hard for us to perceive its scale, the future is already here and can no longer be put on hold.

Story edited by Alina Mărculescu Matiș.



Andreea Bădoiu

Andreea works in advertising but remains hopelessly in love with journalism, with people and their stories. A Journalism graduate of the University of Bucharest in 2013, she worked for several years as a tech editor and then as a writer for an online publication, before moving into the creative industries. She still believes journalism is the most beautiful profession in the world and that stories bring us together and help us be. She hopes to keep up the courage to write and to cover subjects that help others discover new perspectives.

