This AI newsletter is all you need #74
What happened this week in AI by Louie
This week, all attention was on the whirlwind series of events at OpenAI, which unfortunately eclipsed several interesting new model releases. It is safe to say you have followed the twists and turns of the drama, so we won’t cover it in detail here. In summary: OpenAI’s board fired its CEO, Sam Altman, on Friday with no prior warning to key staff or stakeholders. The board justified its action by saying Sam “was not consistently candid in his communications with the board.” Still, even today, it has not given a clear reason to OpenAI employees, executives, or Microsoft. As things stand, 747 of 770 OpenAI staff have signed a joint letter to OpenAI’s board stating they may quit and follow Sam Altman and Greg Brockman to a new AI team at Microsoft unless the board resigns and reinstates Sam and Greg. The letter was also signed by co-founder Ilya Sutskever, who now regrets participating in the board’s actions.
Perhaps the strangest part of all this is the silence from the OpenAI board and its failure to explain its actions, which leaves its motives very unclear. Even more incredibly, the board apparently told OpenAI executives that allowing the company to be destroyed “would be consistent with the mission.”
If the board’s initial decision was driven by AI safety concerns, tension over monetization projects and investor profit-sharing, and doubts about Sam’s leadership and board communication, then at this stage it should be clear to them that allowing OpenAI to collapse helps their cause less than backing down and letting OpenAI survive in its current form. If OpenAI staff leave en masse to join Microsoft (which doesn’t have the same AGI safeguards), it would also mean transferring all of OpenAI’s future potential profits (those beyond the profit cap, set to accrue to the OpenAI charity) to a corporation, Microsoft. Sam would still be in charge of the team and would have no OpenAI board to communicate with at all! Slowing down OpenAI also gives competitors such as Google or Meta, and even other countries such as China, a chance to catch up. In every one of these outcomes, the OpenAI board would have less power to shape the future of AI.
Why should you care?
The significance of these events could range from trivial Silicon Valley politics, to a six-to-eighteen-month setback in AI progress, to even a pivotal moment in Earth’s future. It all depends on your views on the following questions:
What roadblocks will we hit in the current trajectory of LLM progress, and how globally impactful will LLM technology really be?
Is AGI really a risk, and should building safeguards for this be a priority at this stage?
How long would it take to rebuild OpenAI’s GPT-4.5/5 model pipeline within Microsoft, including 1) the RLHF dataset and the ongoing data pipeline from ChatGPT’s 100 million weekly active users, 2) the pre-training dataset, and 3) the training software infrastructure?
How far behind were Google, Anthropic, Meta, xAI, and others from releasing a GPT-4.5/5 level model?
Could this cause enough disruption to allow China to take the lead from the US and impact geopolitics?
Will AI be a winner-takes-most market, where getting there first really makes all the difference?
Were OpenAI’s charity status and AGI safeguards really better placed to benefit humanity and redistribute wealth to all (beyond the several-hundred-billion-dollar profit cap) than a public company like Microsoft?
The leadership direction, organizational structure, culture, and location of the AI leader could well be globally significant, but we will have to wait and see how things play out!
- Louie Peters — Towards AI Co-founder and CEO
Hottest News
1. A Timeline of Sam Altman’s Firing From OpenAI — And the Fallout
Sam Altman is out as CEO of OpenAI, and his ouster led to the resignation of the company’s president and co-founder, Greg Brockman, as well as three senior OpenAI researchers. The situation is rapidly evolving, and this article provides a timeline to help you follow the unfolding events.
2. Paris-Based AI Research Lab Kyutai Secures $330 Million in Funding
Kyutai, a Paris-based AI research lab, secured $330 million in funding to advance the development of artificial general intelligence. With these resources, Kyutai plans to conduct comprehensive research led by PhD students, postdocs, and researchers. Additionally, the lab prioritizes transparency in AI by openly sharing its models, source code, and data.
3. DeepMind Introduces Lyria, a Model for Music Generation
Google DeepMind’s AI music model, Lyria, is transforming the music creation process by producing music of exceptional quality with customizable vocals. The ‘Dream Track’ experiment on YouTube enables artists to connect with fans through AI-generated voice and music, while AI tools enhance the creative journey for professionals in the music industry.
4. GraphCast: AI Model for Faster and More Accurate Global Weather Forecasting
DeepMind has developed GraphCast, an advanced AI system that uses graph neural networks to predict global weather up to 10 days ahead in under a minute, more accurately than the industry-standard HRES system. It can also track cyclones and atmospheric rivers and identify extreme temperatures.
5. Nvidia Unveils H200, Its Newest High-End Chip for Training AI Models
Nvidia introduced the H200 GPU, an upgraded version of the H100 with 141GB of high-bandwidth memory. The extra memory primarily benefits inference with large AI models. Set to be released in Q2 2024, the H200 will compete with AMD’s MI300X GPU, which similarly offers increased memory capacity for handling big models.
Do you think firing Sam Altman is the right move for OpenAI? Share your thoughts in the comments below!
Five 5-minute reads/videos to keep you learning
1. Applying OpenAI’s RAG Strategies
OpenAI’s RAG setup incorporates various retrieval strategies: cosine similarity, multi-query, step-back prompting, Rewrite-Retrieve-Read, and efficient routing. This article expands on each method used in OpenAI’s series of RAG experiments and shows how to implement each (see the retrieval sketch after this list).
2. Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation)
This article explores practical steps for fine-tuning LLMs with Low-Rank Adaptation (LoRA), providing insights and recommendations. Experiments show that LoRA delivers consistent results while saving memory at the cost of longer runtimes. Applying LoRA to all layers and tuning the rank and alpha hyperparameters can improve model performance (a configuration sketch also follows after this list).
3. Start and Improve Your LLM Skills in 2023
This is a complete guide to starting and improving your LLM skills in 2023 without an advanced background in the field, and to staying up to date with the latest news and state-of-the-art techniques. It is intended for anyone with some programming and machine learning background.
4. Here Is How Far We Are to Achieving AGI, According to DeepMind
A team of scientists from Google DeepMind has proposed a new framework for classifying the capabilities and behavior of AGI systems and their precursors. This article explores the framework, including criteria for measuring artificial intelligence, a matrix measuring performance and generality, and another matrix measuring autonomy and risk.
5. OpenAI’s Identity Crisis and the Battle for AI’s Future
This is a well-articulated commentary on the recent series of events at OpenAI. According to the author, the tension between AI safety and market momentum was a factor in the decision to oust Sam Altman.
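For readers who want to try the baseline strategy from the first read above, here is a minimal sketch of cosine-similarity retrieval. The embedding model and corpus are illustrative stand-ins, not what OpenAI used in its experiments.

```python
# Minimal cosine-similarity retrieval sketch. Model name and corpus are
# illustrative; any sentence-embedding model and document set will work.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "LoRA adds low-rank adapter matrices to frozen model weights.",
    "RAG augments an LLM prompt with externally retrieved documents.",
    "GraphCast predicts global weather with graph neural networks.",
]

# Embed documents once; normalizing means a dot product equals cosine similarity.
doc_vecs = model.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # cosine similarity via normalized dot product
    top = np.argsort(-scores)[:k]
    return [corpus[i] for i in top]

print(retrieve("How do retrieval-augmented language models work?"))
```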
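And for the second read, a hedged sketch of its LoRA tips using Hugging Face’s peft library. The base model, target modules, and hyperparameter values are illustrative choices, not the article’s exact setup.

```python
# LoRA configuration sketch with peft: apply LoRA to all attention and MLP
# projection layers, and treat rank (r) and alpha as the key knobs to tune.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # small stand-in LLM

config = LoraConfig(
    r=16,            # rank of the low-rank update matrices
    lora_alpha=32,   # scaling factor; a common heuristic is alpha = 2 * r
    lora_dropout=0.05,
    # "All layers" tip: target attention and MLP projections, not just q/v.
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj", "fc1", "fc2"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically only ~1% of weights are trainable
```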
Repositories & Tools
Ragas is a framework that helps you evaluate your Retrieval Augmented Generation (RAG) pipelines. RAG denotes a class of LLM applications that use external data to augment the LLM’s context. See the usage sketch after this list.
Screenshot-to-code is an app that converts a screenshot to HTML/Tailwind CSS. It uses GPT-4 Vision to generate the code and DALL-E 3 to generate similar-looking images.
Netmind Power is a decentralized machine learning and AI platform. You can train your own models on the platform; it finds the compute from its network and distributes the training job for you.
GPT Crawler lets you provide a site URL, which it will crawl and use as the knowledge base for a custom GPT. You can either share this GPT or integrate it into your sites and apps as a custom assistant.
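As a quick illustration of Ragas, here is a minimal evaluation sketch. It assumes the evaluate()/metrics interface from the Ragas documentation, and the example row is made up; the metrics rely on an LLM judge, so an OpenAI API key must be configured.

```python
# Minimal Ragas evaluation sketch, assuming the documented interface.
# Requires OPENAI_API_KEY, since the metrics call an LLM judge.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness

eval_data = Dataset.from_dict({
    "question": ["Who developed GraphCast?"],
    "contexts": [["GraphCast is a weather model developed by Google DeepMind."]],
    "answer": ["GraphCast was developed by Google DeepMind."],
})

# Scores each (question, contexts, answer) row and returns aggregate metrics.
results = evaluate(eval_data, metrics=[faithfulness, answer_relevancy])
print(results)  # e.g. {'faithfulness': 1.0, 'answer_relevancy': 0.98}
```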
Top Papers of The Week!
1. Comparing Humans, GPT-4, and GPT-4V On Abstraction and Reasoning Tasks
This study compares GPT-4 and its multimodal version, GPT-4V, with humans on abstraction and reasoning tasks using the ConceptARC benchmark. Results show that neither GPT-4 version matches human-level abstract reasoning, even with detailed one-shot prompts and simplified image tasks.
2. GPT-4V in Wonderland: Large Multimodal Models for Zero-Shot Smartphone GUI Navigation
The paper presents MM-Navigator, a GPT-4V-based agent that performs zero-shot GUI navigation on smartphones using large multimodal models. It demonstrates impressive accuracy in understanding and executing instructions on iOS screens.
3. A Survey on Language Models for Code
This comprehensive survey explores the evolution of and advances in code processing with language models. It covers over 50 models, 30 evaluation tasks, and 500 related works, spanning both general language models and specialized models trained on code. The survey is open-sourced and kept up to date in a GitHub repository.
4. Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models
Retrieval-augmented language models (RALMs) enhance language models’ capabilities but can generate misguided responses due to unreliable retrieved information. A new approach, Chain-of-Noting (CoN), generates sequential reading notes to evaluate document relevance and improve the robustness of RALM responses (see the prompt sketch below).
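To make the CoN idea concrete, here is a rough prompt sketch in Python. The wording is our paraphrase of the approach, not the paper’s exact template.

```python
# Chain-of-Noting-style prompt: the model first writes a relevance note per
# retrieved document, then answers using only documents its notes judged
# relevant. The template below is our paraphrase, not the paper's own.
def chain_of_note_prompt(question: str, documents: list[str]) -> str:
    doc_block = "\n".join(f"[Doc {i + 1}] {d}" for i, d in enumerate(documents))
    return (
        f"Question: {question}\n\n"
        f"Retrieved documents:\n{doc_block}\n\n"
        "Step 1: For each document, write a one-sentence note on whether it is "
        "relevant to the question and what it contributes.\n"
        "Step 2: Answer the question using only the relevant documents. If none "
        "are relevant, answer from your own knowledge or say you don't know."
    )

prompt = chain_of_note_prompt(
    "Who developed GraphCast?",
    ["GraphCast is a weather model from DeepMind.", "Lyria is a music model."],
)
# Send `prompt` to any chat/completions endpoint of your choice.
```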
Quick Links
1. Meta announced Emu Video and Emu Edit, its latest AI video generation and image editing breakthroughs. Emu was announced in September and is now used in production, powering experiences such as Meta AI’s Imagine feature.
2. NVIDIA announced it has supercharged the world’s leading AI computing platform by introducing the NVIDIA HGX™ H200. The platform features the NVIDIA H200 Tensor Core GPU with advanced memory.
3. IBM furthers its commitment to climate action through new sustainability projects and free training in green and technology skills for vulnerable communities.
Who’s Hiring in AI!
Sr. Backend Software Engineer (Billing Engineer) @Philo (Remote)
Decision Scientist — Data and Analytics @Salesforce (Remote)
Senior Software Engineer — DGX Cloud Messaging platform @NVIDIA (US/Remote)
AI & ML Engineer (Python, PyTorch, Tensorflow) @CodeLink (Ho Chi Minh City, Vietnam)
AI/ML Engineer @Intersog (Remote)
Technical Product Manager @CodeHunter (Remote)
Data Science Associate @Black Swan Data, Inc. (Remote)
Interested in sharing a job opportunity here? Contact sponsors@towardsai.net.
If you are preparing for your next machine learning interview, don’t hesitate to check out our leading interview preparation website, Confetti!