AI litigation in 2025-2026 has centered on whether training large language models on copyrighted content constitutes fair use or infringement. Courts have delivered mixed, fact-specific rulings, often finding the use transformative and ruling in favor of AI companies, while scrutinizing market harm and the use of unlicensed data closely, setting the stage for consequential appeals.
Key 2025-2026 Legal Trends & Rulings
- Training as "Fair Use": In Bartz v. Anthropic (June 2025) and Kadrey v. Meta, courts found that training LLMs on copyrighted works can be "transformative" and therefore fair use.
- Distinguishing Storage vs. Training: While training itself has been protected, judges have penalized AI companies for amassing large, unauthorized datasets, as seen in the Anthropic settlement, where the company faced liability for retaining pirated books.
- The "Market Harm" Argument: Courts are weighing whether AI-generated content substitutes for the work of human creators. Recent rulings hold that while AI may create competition, competition alone does not violate copyright, though plaintiffs are actively challenging this view.
- Major Lawsuits: Key cases include authors suing OpenAI and Anthropic, news organizations suing Microsoft/OpenAI, and music labels accusing Anthropic of using copyrighted lyrics.
Core Legal Issues
- Input (Training Data): Whether AI companies need permission to scrape web content for training.
- Output (Generated Content): Whether AI outputs are "substantially similar" to training data and, thus, infringing.
- Human Authorship: The U.S. Copyright Office continues to deny protection for works created solely by AI, requiring human-authored elements to be present.
Outlook
The battle is shifting to higher courts, with publishers, news organizations, and music labels expected to appeal, aiming to prove that mass AI training destroys the market for original, human-created content.