Today in AI
In this issue: News roundup · Today's big story · Research spotlight · This week on the podcast
News roundup
The top stories in AI today.
NEW LAUNCHES
The latest features, products & partnerships in AI
GOVERNMENT
Press releases, regulation, defense & politics
AI MODELS
Deployment, research, training & infrastructure
IMPLEMENTATION
Announcements, strategies & programs
What’s happening in AI right now
The content wars enter a new phase
OpenAI's recent court victory, the dismissal of a copyright lawsuit brought by publishers Raw Story and AlterNet, reveals an uncomfortable truth: we're using industrial-era legal frameworks to regulate information-age innovations. While publishers argue over content ownership, the deeper question is how AI is upending traditional notions of value creation.
The case at hand
The lawsuit centered on claims that OpenAI violated the DMCA by stripping copyright management information from articles used to train its models. Judge Colleen McMahon ruled that the publishers lacked legal standing to bring the claim, citing insufficient proof that ChatGPT was trained on their material or that such training caused them concrete harm.
Media economics breaking down?
The fundamental rules and frameworks that governed content businesses are unraveling. In the traditional model, value came from scarcity: creating and distributing unique content that others couldn't easily replicate. The marginal cost of producing quality content was high, and distribution channels were limited.
AI inverts these economics. Value emerges not from individual pieces of content but from the patterns discovered across vast datasets. The marginal cost of "understanding" new content approaches zero: once an AI model is trained, it can process and analyze new content at virtually no additional cost.
This explains why conventional damage claims struggle in court. How do you calculate harm when the value derived isn't from copying specific content but from discovering patterns across billions of data points? Traditional copyright law never contemplated this type of use.
Network effects at scale
The dynamics become even more interesting at scale. While traditional content businesses face diminishing returns as they grow, AI systems often exhibit the opposite: more data generally leads to better performance, which attracts more users, who generate still more data, creating a powerful flywheel effect.
Beyond simple patterns
What makes this transition particularly challenging is that we're not simply seeing a shift from one clear business model to another. Instead, we're entering a period where multiple new models may emerge and compete. Some organizations will focus on becoming AI infrastructure providers, others on specialized applications, and still others on curating and validating training data.
The most important question isn't who owns what content; it's who can best capture value in a world where understanding patterns across all content matters more than owning any specific piece.
We publish daily research, playbooks, and deep industry data breakdowns. Learn more here.

Research spotlight
The Bagel team just published new research on two complementary methods for advancing AI reasoning: training-time and inference-time techniques.
Training-Time Enhancements involve refining the model itself. Methods like Parameter-Efficient Fine-Tuning (PEFT) improve learning efficiency by updating only a small subset of parameters while keeping the base model frozen; approaches like WizardMath's 3-step reasoning strengthen structured reasoning.
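As a rough illustration of PEFT (not drawn from the Bagel research), here is a minimal LoRA setup using Hugging Face's peft library; the model name, target modules, and hyperparameters are placeholder choices:

```python
# Minimal LoRA-style PEFT setup using Hugging Face's peft library.
# The model name, target modules, and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM

# LoRA adds small trainable low-rank matrices to the chosen attention
# projections; the original weights stay frozen.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # typically well under 1% of all params
```

Because only the small adapter weights are trained, the same frozen base model can serve many task-specific adapters.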
Inference-Time Enhancements focus on reasoning at generation time, using Chain of Thought prompting to elicit step-by-step logic without extra training. Techniques like Self-Consistency, which samples multiple reasoning paths and keeps the majority answer, and Program of Thought, which delegates computation to generated code, enable precise, stepwise reasoning.
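To make Self-Consistency concrete, here is a minimal sketch of the voting logic; generate() is a hypothetical stand-in for any LLM API call, and the prompt format and answer parsing are illustrative:

```python
# Sketch of Self-Consistency: sample several chain-of-thought completions
# and keep the most common final answer. generate() is a hypothetical
# stand-in for any LLM API call; only the voting logic is concrete.
from collections import Counter

COT_PREFIX = "Let's think step by step.\n"

def generate(prompt: str, temperature: float) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError

def extract_answer(completion: str) -> str:
    # Assumes the model ends with "Answer: <value>"; parsing is task-specific.
    return completion.rsplit("Answer:", 1)[-1].strip()

def self_consistent_answer(question: str, n_samples: int = 5) -> str:
    prompt = COT_PREFIX + question
    answers = [
        extract_answer(generate(prompt, temperature=0.7))
        for _ in range(n_samples)
    ]
    # Diverse reasoning paths that converge on the same answer signal
    # a more reliable result than a single greedy decode.
    return Counter(answers).most_common(1)[0][0]
```

Sampling at a nonzero temperature is what makes the vote informative: each path reasons differently, and agreement across paths is evidence the answer is right.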
The findings suggest combining both approaches: training-time techniques build foundational reasoning ability, while inference-time techniques optimize how that ability is applied. Bagel Network supports this advancement with open-source infrastructure, empowering community-driven AI development through monetizable open-source AI.
About Bagel: Bagel is an AI & cryptography research lab, building the world's first monetization layer for open-source AI.
This week on the podcast
Can’t get enough of our newsletter? Check out our podcast Future-Proof.
In this episode, hosts Anthony Batt and Shane Robinson talk with guest Joe Veroneau from Conveyor about outsmarting paperwork. Conveyor automates security reviews and document sharing between companies, using language models to fill out security questionnaires automatically. This saves customers significant time and improves the quality of their responses.