$120 Million to Challenge Nvidia’s Dominance 💾

The New Chip War is Here 💾

No hype. No doom.

Just actionable resources and strategies to accelerate your success in the age of AI 

Today’s Story

A team of Harvard dropouts has raised $120 million to challenge Nvidia’s dominance in AI chips with their startup, Etched.

Demand for Nvidia’s GPUs (graphics processing units), which excel at training AI models thanks to their parallel processing abilities, has skyrocketed. The surge has been so unprecedented that Nvidia is now one of the top three most valuable companies in the world.

Etched is challenging that dominance with a bet: as AI advances, computing needs will increasingly be met by customized, hard-wired chips called ASICs (application-specific integrated circuits), which are more efficient than Nvidia’s general-purpose GPUs.

Etched’s Sohu chip is designed specifically for “transformers,” the core architecture behind AI models like ChatGPT, and the company says its single-purpose design delivers more than 10 times the speed of Nvidia’s GPUs.

Essentially, Etched is bringing purpose-built chip architecture to AI, improving speed and reliability and most likely lowering training costs.
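For context, the core of every transformer is the same fixed attention computation repeated at massive scale, which is what makes it a plausible target for a hard-wired chip. The sketch below shows scaled dot-product attention in plain NumPy; the shapes and variable names are illustrative only, not a description of Sohu’s hardware.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, the core transformer operation.

    Q, K, V: arrays of shape (seq_len, d_model). A transformer-only ASIC
    can hard-wire this matmul -> softmax -> matmul pipeline instead of
    scheduling it on general-purpose GPU cores.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # token-to-token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # weighted mix of value vectors

# Illustrative sizes only: 8 tokens, 64-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 64)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (8, 64)
```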

Headlines

  • AWS’s AI Ambitions: Encircling and Dominating (COAI) - The AWS leviathan is aiming to dominate the AI market with a comprehensive ecosystem spanning infrastructure, models, and application development tools.

  • Elon Musk’s Grok Chatbot May Integrate Midjourney? (COAI) - The leading image-generation software may become the engine behind Grok’s own image capabilities; recent evidence in Grok’s source code suggests the integration is likely.

  • UAE Allies with US in AI Race (COAI) - The UAE’s strategic alignment with the US was recently highlighted by Microsoft’s $1.5 billion investment in the Abu Dhabi-based AI group G42, a deal perhaps motivated by the Biden administration’s desire to limit China’s influence in the region.

Research

Google Researches AI Reasoning

Researchers at Google’s Brain Team are exploring chain-of-thought prompting for large language models and demonstrating its effectiveness in improving performance on complex reasoning tasks.

The team shows that models like PaLM can achieve state-of-the-art performance on reasoning benchmarks when prompts include intermediate reasoning steps. In other words, chain-of-thought prompting works by walking the model through a problem in smaller, manageable steps that lead to the final answer.
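As a rough illustration of the technique, here is a few-shot chain-of-thought prompt built in Python. The worked example (the tennis-ball problem) is the canonical one from the paper; only the string construction is shown, and the resulting prompt can be sent to any LLM API.

```python
# Chain-of-thought prompting: include a worked example whose answer spells
# out the intermediate reasoning steps, so the model is nudged to reason
# step by step before giving its final answer.

standard_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?\n"
    "A:"
)

chain_of_thought_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?\n"
    "A:"  # the model now tends to write out its own steps before answering
)

print(chain_of_thought_prompt)
```

The only difference between the two prompts is the worked answer: spelling out the intermediate steps is what leads the model to produce its own reasoning chain, which is where the benchmark gains come from.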
