OpenAI Wins Copyright Suit
The content wars enter a new phase as a copyright lawsuit from publishers Raw Story and AlterNet is dismissed.
Today in AI
In partnership with
News roundup
Today's big story
Research spotlight
This week on the podcast
News roundup
The top stories in AI today.
NEW LAUNCHES
The latest features, products & partnerships in AI
Fast.ai and Answer.AI merge, unveil AI code education platform
CapCut's "Commerce Pro" streamlines content creation for e-commerce
Google's new "Vids" tool will create video presentations for you
Microsoft just updated 38-year-old software with AI, and the results are amazing
Bagel Explores: training-time and inference-time techniques to advance AI reasoning (Sponsored)
GOVERNMENT
Press releases, regulation, defense & politics.
AI MODELS
Deployment, research, training & infrastructure
FrontierMath: How to determine advanced math capabilities in LLMs
How new AI models are compressing videos without reducing quality
Google may accelerate its Gemini 2 AI model release timeline
How Roboflow saved 74 years of developer time with Metaās SAM model
AI expert Bruce Schneier on why society needs "public AI models"
IMPLEMENTATION
Announcements, strategies & programs
What's happening in AI right now
The content wars enter a new phase
OpenAI's recent court victory, dismissing a copyright lawsuit from publishers Raw Story and AlterNet, reveals an uncomfortable truth: we're using industrial-era legal frameworks to regulate information-age innovations. While publishers argue about content ownership, the bigger question is how AI is upending traditional notions of value creation.
The case at hand
The lawsuit centered on claims that OpenAI violated digital copyright protections by using publishers' content to train its models. Judge Colleen McMahon ruled that the publishers lacked legal standing to bring the claim, citing insufficient proof that ChatGPT was trained on their material or that such training caused harm.
Media economics breaking down?
The fundamental rules and frameworks that governed content businesses are unraveling. In the traditional model, value came from scarcity: creating and distributing unique content that others couldn't easily replicate. The marginal cost of producing quality content was high, and distribution channels were limited.
AI inverts these economics. Value emerges not from individual pieces of content but from the patterns discovered across vast datasets. The marginal cost of "understanding" new content approaches zero: once an AI model is trained, it can process and analyze new content at virtually no additional cost.
This explains why conventional damage claims struggle in court. How do you calculate harm when the value derived isn't from copying specific content but from discovering patterns across billions of data points? Traditional copyright law never contemplated this type of use.
Network effects at scale
The dynamics become even more interesting at scale. While traditional content businesses face diminishing returns as they grow, AI systems often exhibit the opposite: more data generally leads to better performance, which attracts more users, generating more data in a powerful flywheel effect.
Beyond simple patterns
What makes this transition particularly challenging is that we're not simply seeing a shift from one clear business model to another. Instead, we're entering a period where multiple new models may emerge and compete. Some organizations will focus on becoming AI infrastructure providers, others on specialized applications, and still others on curating and validating training data.
The most important question isn't who owns what content; it's who can best capture value in a world where understanding patterns across all content matters more than owning any specific piece.
We publish daily research, playbooks, and deep industry data breakdowns. Learn More Here
The Bagel team just published new research on two complementary methods for advancing AI reasoning: training-time and inference-time techniques.
Training-Time Enhancements involve refining model structures. Methods like Parameter-Efficient Fine-Tuning (PEFT) optimize learning efficiency by targeting specific neural pathways within fixed frameworks, while approaches like WizardMath's 3-step reasoning improve structured reasoning.
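The core idea behind PEFT methods such as LoRA can be sketched in a few lines: rather than updating a large frozen weight matrix, you train two small low-rank matrices whose product acts as the update. This is a minimal NumPy illustration of that idea, not Bagel's or any library's actual implementation; all names and dimensions here are made up for the example.

```python
import numpy as np

d, k, r = 512, 512, 8          # layer dimensions and a much smaller adapter rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weights (never updated)
A = rng.standard_normal((d, r)) * 0.01   # trainable down-projection
B = np.zeros((r, k))                     # trainable up-projection (initialized to zero)

def adapted_forward(x):
    # Effective weight is W + A @ B, but the full-size update to W is
    # never materialized; only A and B (d*r + r*k parameters) are trained.
    return x @ W + (x @ A) @ B

full_params = W.size
adapter_params = A.size + B.size
print(f"trainable fraction: {adapter_params / full_params:.3%}")  # 3.125%
```

With B initialized to zero the adapted layer starts out identical to the pretrained one, which is why low-rank adapters can be bolted on without disturbing the base model's behavior.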
Inference-Time Enhancements focus on on-the-fly thinking, using Chain of Thought prompting to activate logical processing without extra training. Techniques like Self-Consistency for validation and Program of Thought for coding tasks enable precise, stepwise reasoning.
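Self-Consistency, for instance, samples several independent reasoning paths and majority-votes on their final answers. The sketch below illustrates only that voting mechanism; `noisy_solver` is a hypothetical stand-in for sampling a chain-of-thought from a real model, not an actual LLM call.

```python
from collections import Counter

def noisy_solver(question, sample_id):
    # Hypothetical stub for one sampled chain-of-thought path: in this
    # toy, three of every five reasoning paths reach the right answer.
    return 42 if sample_id % 5 < 3 else 41

def self_consistency(question, n_samples=15):
    # Sample several independent reasoning paths, then majority-vote on
    # the final answers to smooth over occasional faulty chains.
    answers = [noisy_solver(question, i) for i in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 6 * 7?"))  # majority answer: 42
```

The appeal of this family of techniques is that they require no extra training: any model that can be sampled repeatedly can be wrapped this way at inference time.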
The findings suggest combining both approaches: training-time builds foundational reasoning, while inference-time optimizes its application. Bagel Network supports this advancement through open-source infrastructure, empowering a community-driven AI evolution through monetizable open-source AI.
About Bagel: Bagel is an AI & cryptography research lab, building the world's first monetization layer for open-source AI.
This week on the podcast
Can't get enough of our newsletter? Check out our podcast Future-Proof.
In this episode, hosts Anthony Batt and Shane Robinson talk with guest Joe Veroneau from Conveyor about outsmarting paperwork. Conveyor helps automate security reviews and document sharing between companies, using language models to fill out security questionnaires, which saves customers significant time and improves the quality of their responses.
How'd you like today's issue? Have any feedback to help us improve? We'd love to hear it!