The US government has taken notice of the potentially negative effects of generative AI on areas like journalism and content creation. Senator Amy Klobuchar, along with seven Democratic colleagues, has urged the Federal Trade Commission (FTC) and the Justice Department to investigate generative AI products like ChatGPT for potential antitrust violations, according to a press release.
"Recently, multiple dominant online platforms have introduced new generative AI features that answer user queries by summarizing, or, in some cases, merely regurgitating online content from other sources or platforms," the letter states. "The introduction of these new generative AI features further threatens the ability of journalists and other content creators to earn compensation for their vital work."
The lawmakers went on to note that traditional search results lead users to publishers' websites, while AI-generated summaries keep users on the search platform "where that platform alone can profit from the user's attention through advertising and data collection."
The letter argues that these products also have significant competitive consequences that distort markets for content: when a generative AI feature answers a query directly, the content creator, whose work has been relegated to a lower position on the page, is often forced to compete with content generated from their own work.
The fact that AI may be scraping news sites without even directing users to the original source could amount to "exclusionary conduct or an unfair method of competition in violation of antitrust laws," the lawmakers concluded. (That's on top of being a potential violation of copyright laws, but that's another legal battle altogether.)
Lawmakers have already proposed a couple of bills designed to protect artists, journalists and others from unauthorized generative AI use. In July, three senators introduced the COPIED Act to combat and monitor the rise of AI content and deepfakes. Later in the month, a group of senators introduced the NO FAKES Act, a bill that would make it illegal to create digital recreations of a person's voice or likeness without their consent.
AI poses a particularly large risk to journalism, both local and global, by removing the sources of revenue that fund original and investigative reporting. The New York Times, for one, has cited instances of ChatGPT providing users with "near-verbatim excerpts" from paywalled articles. OpenAI, for its part, has acknowledged that it would be impossible to train leading generative AI models without copyrighted materials.
This article originally appeared on Engadget at https://ift.tt/hGNzwQb