Sitemap - 2024 - Towards AI Newsletter

TAI #132: DeepSeek v3 - 10x+ Improvement in Both Training and Inference Cost for Frontier LLMs

Transform Your Career in AI - 20% Off for the Holidays!

TAI 131: OpenAI's o3 Passes Human Experts; LLMs Accelerating With Inference Compute Scaling

TAI 130: DeepMind Responds to OpenAI With Gemini Flash 2.0 and Veo 2

TAI 129: Huge Week for Gen AI With o1, Sora, Gemini-1206, Genie 2, ChatGPT Pro and More!

TAI #128: Scaled Self-Driving Arriving via Different Routes? Waymo vs FSD v13

TAI #127; DeepSeek releases R1-Lite - the first reasoning model competitor to OpenAI’s o1

TAI #126; New Gemini, Pixtral, and Qwen 2.5 model updates; Towards AI's From Beginner to LLM Developer course!

Why We Will Need Millions of LLM Developers: Launching Towards AI’s New One-Stop Conversion Course

#125: Training Compute Scaling Saturating As Orion, Gemini 2.0, Grok 3, and Llama 4 Approach?

TAI #124; SearchGPT, Coding Assistant adoption, Towards AI Academy launch, and more!

TAI #123; Strong Upgrade to Anthropic’s Sonnet and Haiku 3.5, but Where’s Opus?

TAI #122; LLMs for Enterprise Tasks; Agent Builders or Fully Custom Pipelines?

#121: Is This the Beginning of AI Starting To Sweep the Nobel Prizes?

TAI #120; OpenAI DevDay in Focus!

#119 New LLM audio capabilities with NotebookLM and ChatGPT Advanced Voice

#118: Open source LLMs progress with Qwen 2.5 and Pixtral 12B

#117: Do OpenAI's o1 Models Unlock a Full "Moore's Law" Feedback Loop for LLM Inference Tokens?

TAI #116; Rise of the Protein Foundation Model; Comparing AlphaProteo, Chai-1, HelixFold3, and AlphaFold-3.

#115: LLM Adoption Taking Off? OpenAI API Use Up 2x in 5 Weeks, Llama at 350M Downloads.

#114: Two Paths to Small LMs? Synthetic Data (Phi 3.5) vs Pruning & Distillation (Llama-3.1-Minitron)

#113; Sakana's AI Scientist - Are LLM Agents Ready To Assist AI Research?

TAI #112; Agent Capabilities Advancing; METR Eval and Inference Compute Scaling

TAI #111; What Do DeepSeek's 10x Cheaper Reused LLM Input Tokens Unlock?

TAI #110; Llama 3.1’s scaling laws vs 100k+ H100 clusters?

#109: Cost and Capability Leaders Switching Places With GPT-4o Mini and Llama 3.1?

#108: Conflicting Developments in the AI Regulation Debate

#107: What do enterprise customers need from LLMs?

TAI #106: Gemma 2 and new LLM benchmarks

TAI #105: Claude Sonnet 3.5; price alone is progress.

TAI #104; LLM progress beyond transformers with Samba?

Towards AI #103: Apple integrates GenAI

Towards AI newsletter #102: GenAI advances beginning to benefit weather forecasting?

This AI newsletter is all you need #101

This AI newsletter is all you need #100

This AI newsletter is all you need #99

This AI newsletter is all you need #98

This AI newsletter is all you need #97

This AI newsletter is all you need #96

This AI newsletter is all you need #95

This AI newsletter is all you need #94

This AI newsletter is all you need #93

This AI newsletter is all you need #92

This AI newsletter is all you need #91

This AI newsletter is all you need #90

This AI newsletter is all you need #89

This AI newsletter is all you need #88

This AI newsletter is all you need #87

This AI newsletter is all you need #86

This AI newsletter is all you need #85

This AI newsletter is all you need #84

This AI newsletter is all you need #83

This AI newsletter is all you need #82

This AI newsletter is all you need #81

This AI newsletter is all you need #80