Media giants like Rolling Stone have sued Google, marking the full-scale launch of the AI copyright battle
Rolling Stone, Billboard, Variety and The Hollywood Reporter have sued Google, alleging that it used their content without permission to generate AI summaries. It is the first lawsuit of its kind against Google, and it may well become a watershed for the entire industry.
Penske Media Corporation (PMC), the parent company of these four media outlets, formally filed suit against Google on September 12, 2025. PMC alleges that Google, which provides AI search and cloud computing services, used its exclusive news content without authorization to feed the AI-generated digests that Google calls "AI Overviews".
Google, a major American multinational, operates in online advertising, search, cloud computing, software, quantum computing, e-commerce, consumer electronics and artificial intelligence. The British Broadcasting Corporation (BBC) has even called it "the most powerful company in the world", and it remains one of the most valuable brands globally. Its parent company, Alphabet Inc., belongs to the "Big Tech" camp alongside NVIDIA, Microsoft, Apple, Amazon and Meta.
The lawsuit echoes similar disputes in other countries. In Japan, publishers have accused the AI search engines offered by giants such as Google and Microsoft of copyright infringement, arguing that the summaries their AI tools generate are too similar to the original articles. The Japan Newspaper Publishers and Editors Association has demanded that technology companies obtain the consent of media organizations, warning that AI-generated answers often reproduce news content without authorization.
In its complaint, PMC argues that Google does not give publishers a fair choice: leveraging its monopoly in search, Google forces them either to let their content be used for AI Overviews or to vanish from Google's search results. PMC says this has driven down its website traffic and, with it, its revenue.
The suit underscores that, in an era when AI-generated content has become part of daily life, copyright clearly needs more explicit rules.
Although it is the first lawsuit of its kind against Google, it joins a growing list of copyright-infringement suits against developers of AI systems.
PMC's lawsuit resembles other AI suits, but it is also distinctive because it touches a fundamental question: does copyright law need to be redefined or revised? It also lays bare publishers' anxieties about the rise of large language models (LLMs) and other generative AI (GenAI) systems.
GenAI systems make it very hard for publishers to keep subscriptions as their main source of income. The issue surfaces not only in this case but in others as well, such as The New York Times's suit against Google's rival OpenAI.
"The situation we are facing now is very complex," said Bradley Shimmin, an analyst at Futurum Group. AI summaries might look like a win for consumers and content creators, but in practice they could limit the ability of publishers, especially small ones, to compete in the market.
AI-generated summaries seem to serve consumers by giving them quick answers, but they also siphon off the website traffic that publishers depend on for survival.
Shimmin added that the same tension applies to AI model developers: restrictions on the use of copyrighted material, while good news for publishers, could constrain developers' capacity to innovate.
For now, no market-wide rules govern how AI developers may use other people's content, although some court rulings have touched on the issue. In Bartz v. Anthropic, for instance, the defendant was accused of infringing book copyrights; a federal judge ruled that the "transformative use" of the books in training constituted fair use, while the pirating of those books did not. Other rulings, meanwhile, have held that content generated by AI models is not itself protected by copyright.
Bradley Shimmin said, “So, this has plunged the entire market into a state of uncertainty and instability.”
He also noted that publishers like PMC feel their business models are threatened because consumers must now choose between paid content and derivative content generated by LLMs. Given the complexity of AI litigation, PMC's complaint borrows language from the US Department of Justice's antitrust suits, according to Michael Bennett, a law professor and vice president for data science and AI strategy at the University of Illinois Chicago.
Referring to the Department of Justice's case, Bennett said, "This indicates that the law firm working with PMC most likely studied the latest ruling in that case carefully before filing the complaint."
He added that PMC's suit covers similar ground and draws on conclusions from earlier rulings, such as a federal judge's determination that Google is a monopolist.
The publisher's complaint is also significant because it shows PMC locked in an unwinnable struggle over how its content gets discovered: to be found via Google at all, it must let Google use that content to generate AI summaries, and publishers who opt out of AI Overviews see their content deprioritized in search results.
Bennett said he was not sure PMC's argument would hold up in court, since Google could counter that it was PMC itself that wanted to be indexed by Google.
He said, “Ultimately, we need to assess the value of this position and this argument, and then see how Google responds.”
While Google faces this new round of copyright litigation, some researchers have begun to challenge AI companies' claim that it is impossible to train powerful AI models while respecting copyright. In one study, a team coordinated by EleutherAI, with researchers from the Massachusetts Institute of Technology, Carnegie Mellon University and the University of Toronto, assembled an 8TB dataset composed entirely of openly licensed and public-domain content. On it they trained Comma v0.1, a model with 7 billion parameters whose performance was comparable to Meta's Llama 2 7B, suggesting that high-performing AI can be developed without infringing copyright law.
The Intellectual Property and Existential Crisis of the AI Era
This is a defining battle of the century over AI and human intellectual sovereignty, and its core cannot be reduced to a mere copyright dispute. The lawsuit between PMC and Google that we are witnessing is only the tip of a vast iceberg. As an expert in AI philosophy and ethics, I believe this case exposes three profound philosophical propositions: the paradox of algorithmic hegemony and information freedom, the redefinition of digital labor and intellectual property, and the revaluation of human creativity in the post-human era.
The Paradox of Algorithmic Hegemony and Information Freedom
Google's "AI Overviews" feature is, in essence, information tightly encapsulated and distilled by algorithms. On the surface it improves the efficiency of information retrieval and advances the user experience; from a philosophical perspective, however, it fundamentally alters how information flows. The internet ethos we once believed in was decentralization and the free flow of information, yet Google's AI digests are now forming a new algorithmic hegemony.
PMC's lawsuit alleges a "coercive choice": be ingested into AI summaries, or be demoted in search results. This is no longer ordinary business negotiation but compulsory control over the channels of knowledge dissemination. Under this model, Google's role shifts from "information indexer" to "knowledge arbitrator": it decides not only which information can be discovered, but how that information is presented and how its value is distributed.

This raises a core philosophical question: when a single entity controls the entry point through which the vast majority of people acquire knowledge, does genuine freedom of information still exist? AI digests may seem to save users time, but in fact they erect a high wall that keeps users from ever crossing into the rich context, authorial intent and deeper thinking behind the original text. This "one-step" mode of knowledge consumption may ultimately erode our capacity for critical thinking and deep reading, a cultural crisis graver than any loss of traffic.
The Redefinition of Digital Labor and Intellectual Property Rights
The traditional intellectual property system rests on human creative labor. Every report and every commentary at PMC embodies journalists' interviews, editors' judgments and authors' insights. This is embodied labor in the fullest sense, carrying time costs, emotional investment and moral responsibility. Yet once these fruits of labor are "swallowed" and digested by AI models, they become abstract, disembodied data points: nourishment for the AI's "learning".
The deeper challenge of this lawsuit is that it forces us to re-examine the definitions of "labor" and "creativity". Does the training of AI models constitute an "exploitation" of human labor? If, by learning from tens of thousands of news reports, a model can eventually generate a brand-new summary that substitutes for the original, is that generative process "transformative" enough to count as a new, independent creation? We are entering a gray zone of digital labor in which traditional legal frameworks struggle to render effective judgments. That uncertainty is what panics publishers: their core asset, their content, is being copied and exploited intangibly and at no cost, and they cannot find a clear legal weapon with which to defend their survival.
The Revaluation of Human Creativity in the Post-Human Era
PMC's lawsuit also forces us to ponder: when AI can generate content efficiently and at scale, what exactly is the unique value of human creativity?
From the standpoint of AI philosophy, an AI's "creativity" is recombination and derivation based on statistical probability, not something rooted in the unique subjective experience, emotional resonance and moral judgment of human beings. As the researchers cited above have shown, AI can train high-performing models on open, public-domain data, which means AI's path to innovation can bypass copyright barriers. But is that the innovation humanity truly needs? AI can produce news summaries that are grammatically and logically impeccable, but it cannot, like a journalist, risk its life on a battlefield to capture the truth, nor, like an editor, invest the news with a commitment to fairness and justice.
The final ruling in this lawsuit will not merely be about monetary compensation or traffic sharing; it will be a symbolic judgment. It will shape how we view the human role in the information society: are we mere producers of feedstock for AI, or creators and guardians of meaning? It will force us to redefine the humanistic value of journalism: the value of news lies not only in the information itself, but in the human perspective, emotional warmth and moral responsibility behind it. Under AI's impact, human journalists will be pushed to evolve from mere "information carriers" into "truth diggers" and "interpreters of meaning". That may be the stern yet hopeful philosophical proposition the AI era puts before us, and this lawsuit is a crucial vote that humanity is casting for its own future at a crossroads of history.
