Ilya Sutskever leaves OpenAI - Weekly News Roundup - Issue #467
Plus: Apple is close to using ChatGPT; Microsoft builds its own LLM; China is sending a humanoid robot to space; lab-grown meat is on shelves but there is a catch; hybrid mouse/rat brains; and more!
Hello and welcome to Weekly News Roundup Issue #467. What a week…
On Monday, OpenAI held its Spring Update event and unveiled GPT-4o, its most advanced model yet, available to free users. I've analyzed GPT-4o in depth in a separate article here.
Then, on Tuesday, Google announced a suite of new AI models and services during Google I/O. There's a lot to unpack there, and I’m planning to share my thoughts on Google's announcements over the weekend.
And finally, on Wednesday, Ilya Sutskever announced he is leaving OpenAI, marking the end of an era in OpenAI’s history. We’ll focus on that in this week’s news roundup and discuss what the departures of Sutskever and others mean for OpenAI.
In other news, Apple is reportedly close to signing a deal with OpenAI to put ChatGPT on its devices, and Microsoft is building its own large language model to compete with GPT-4, Claude 3, and Gemini. In robotics, China is sending a humanoid robot to its space station, and the US Air Force says its recently tested AI pilot is on track to be as good as any human pilot.
Enjoy!
It’s been a big week for OpenAI. The company successfully launched GPT-4o, its first natively multimodal language model, with demos of human-like conversations with the model making the rounds on the internet. However, a couple of days after the Spring Update, OpenAI announced changes to its leadership: Ilya Sutskever, one of the founders of OpenAI and its long-time chief scientist, has left the company, symbolically closing a chapter in OpenAI’s history.
Ilya Sutskever is one of the most impactful AI researchers in the world. According to his Google Scholar profile, his work has been cited over half a million times, and his name is attached to some of the most important papers in AI research. In 2012, Sutskever, together with Geoffrey Hinton and Alex Krizhevsky, published the legendary ImageNet Classification with Deep Convolutional Neural Networks paper that kickstarted the deep learning revolution of the 2010s. He is also listed as one of the authors of the AlphaGo paper. During his time at Google Brain, he contributed to TensorFlow, a popular deep learning framework, and worked on sequence-to-sequence models.
In 2015, Sutskever left Google to become a co-founder of OpenAI and to serve as its chief scientist from day one. He is listed as an author of or contributor to some of the most important research papers from OpenAI, including those on OpenAI Five, DALL·E and DALL·E 2, and every GPT model, from the very first GPT to the recently announced GPT-4o.
However, since the release of ChatGPT and its surprising and explosive growth, tensions began to form inside OpenAI. According to anonymous sources from inside OpenAI, the company was divided into two groups. On one side, there was a group that advocated for frequent releases and putting advanced AI models into the hands of as many people as possible, with Sam Altman becoming the key figure in this group. On the other side, there was a group promoting AI safety over frequent releases and wanting to preserve OpenAI’s original mission. Ilya Sutskever became the focal point for the second group.
That tension reached a breaking point at the end of November 2023, when the pro-safety group convinced OpenAI’s board of directors to relieve Sam Altman of his position as CEO. However, the coup did not last long, and Altman was reinstated. As a result, Sutskever lost his seat on the board of directors but remained in his role as Chief Scientist. I have covered the schism at OpenAI in detail in these three articles.
Since the events of November 2023, Sutskever had remained quiet until the news of his departure from OpenAI broke. According to both OpenAI’s announcement and Sutskever himself, the two parted ways on good terms. Jakub Pachocki has been named the new Chief Scientist at OpenAI. Interestingly, Pachocki was one of the people who joined Altman during his brief exile from the company.
However, Sutskever is not the only person who recently left OpenAI. Shortly after the news about Sutskever broke, Jan Leike, an AI safety researcher at OpenAI, also announced his departure. Together with Sutskever, he co-led the Superalignment team, whose goal was to ensure that a superintelligent AI system would follow human intent. Leike cites the change in OpenAI’s culture as the reason for his departure, saying that “over the past years, safety culture and processes have taken a backseat to shiny products”. Altman promised to address Leike’s comments in the next couple of days.
Sutskever’s and Leike’s departures are part of a broader trend of AI safety researchers leaving OpenAI in recent months. Earlier this year, two other people working on safety and governance left the company. One of them wrote on his LessWrong profile that he quit OpenAI "due to losing confidence that it would behave responsibly around the time of AGI."
Sutskever’s coup to restore the company to its original values failed, and now, with his departure, one chapter of OpenAI’s history has been closed.
Since the release of ChatGPT, OpenAI has transformed itself from an AI research lab into an $86 billion company leading the generative AI revolution. OpenAI and Sam Altman have become synonymous with artificial intelligence. When OpenAI releases a new product, it is discussed everywhere. However, along the way, the company has changed.
GPT-4o is the first release of the new OpenAI, and it redefines what the “open” in OpenAI means. It no longer means open research, open weights, or open source; it now means giving as many people as possible access to state-of-the-art AI models. In this new OpenAI, there seems to be no place for people like Ilya Sutskever, Jan Leike, and others who call on the company to stop and think before moving forward.
If you enjoy this post, please click the ❤️ button or share it.
Do you like my work? Consider becoming a paying subscriber to support it.
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous contribution towards the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🧠 Artificial Intelligence
Apple Nears Deal With OpenAI to Put ChatGPT on iPhone
Bloomberg reports that Apple is close to signing an agreement with OpenAI to use ChatGPT on its devices. If the rumour proves true, OpenAI’s models could be powering Apple’s AI features and services in iOS 18, which Apple is expected to unveil, along with a suite of new AI tools, at its WWDC event on June 10th. Previous rumours suggested that Apple was also in talks with Google to use Gemini, but those discussions have not led to an agreement, according to Bloomberg.
New Microsoft AI model may challenge GPT-4 and Google Gemini
According to a report published by The Information, Microsoft is working on an in-house large language model to challenge OpenAI’s GPT-4, Anthropic’s Claude 3, and Google’s Gemini. The development of the new model, codenamed MAI-1, is led by Mustafa Suleyman, co-founder of DeepMind and Inflection, who joined Microsoft after the tech giant essentially absorbed Inflection. The news of Microsoft working on a competitor to GPT-4 may sound surprising, given the close relationship between Microsoft and OpenAI, and the billions of dollars Microsoft has invested in OpenAI. However, this seems to be part of Microsoft’s plan to reduce reliance on OpenAI, just in case.
UK engineering firm Arup falls victim to £20m deepfake scam
A British engineering company has fallen victim to a deepfake scam in which criminals, posing as senior company officers on a deepfake video call, stole £20 million. I hope this story raises awareness of this kind of attack among businesses and individuals alike.
EU warns Microsoft it could be fined billions over missing GenAI risk info
The European Union has warned Microsoft of potential fines of up to 1% of its global annual revenue under the Digital Services Act (DSA) for failing to respond fully to a request for information about its generative AI tools. This request, made in March, sought details on systemic risks posed by AI features in Bing, including "Copilot in Bing" and "Image Creator by Designer." The EU is particularly concerned about these tools' impact on civic discourse and electoral processes. Microsoft has until May 27 to comply or risk enforcement actions, including periodic penalties of up to 5% of daily income.
Falcon 2: UAE’s Technology Innovation Institute Releases New AI Model Series, Outperforming Meta’s New Llama 3
The UAE-based Technology Innovation Institute (TII) has released the second generation of its Falcon open models. Falcon 2 comes in two versions: 11B and 11B VLM. The Vision-to-Language Model, or VLM, is TII’s first multimodal model that can operate on both text and image inputs. According to TII, Falcon 2 11B surpasses Meta’s Llama 3 8B and is on par with Google’s Gemma 7B model.
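Since Falcon 2 is an open-weights release, you can try it yourself. Here is a minimal sketch using the Hugging Face Transformers library; note that the repo id "tiiuae/falcon-11B" is my assumption based on TII's naming of the first-generation Falcon models, so verify the exact identifier on the Hugging Face hub before running.

```python
# A minimal sketch of running Falcon 2 11B locally with Hugging Face
# Transformers. The repo id below is an assumption based on TII's naming
# conventions; check the Hugging Face hub for the exact identifier.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-11B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision; 11B parameters still need a large GPU
    device_map="auto",           # spread layers across available devices
)

prompt = "The Technology Innovation Institute is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```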
CoreWeave, a $19B AI compute provider, opens European HQ in London with plans for 2 UK data centers
CoreWeave, a GPU cloud provider worth $19 billion, is opening a European office in London and plans to open two data centres in the UK this year as part of a £1 billion investment. Founded in 2017, CoreWeave provides on-demand access to Nvidia GPUs, which are crucial for AI development. Amidst a surge in AI interest, the company raised $1.1 billion, nearly tripling its previous valuation.
Eric Schmidt: Why America needs an Apollo program for the age of AI
Eric Schmidt, former CEO of Google, calls for increased US investment in AI infrastructure to maintain the nation's dominance in the field. In this article, Schmidt argues that the US needs a national compute strategy, akin to an Apollo program for the age of AI. His plan calls for more dedicated government AI supercomputers and the expansion of federal AI research infrastructure, including a hybrid model that combines federal and commercial cloud computing. A national compute strategy, Schmidt adds, should also include a talent strategy to ensure the recruitment and retention of top global AI talent.
So What If My AI Bot Wrote This Paper!?
Modern state-of-the-art AI models require an enormous amount of data, be it text, images, or videos, for training. The teams building these models take that data from the internet and, in the process, willingly or not, violate copyright laws on a massive scale.
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
🤖 Robotics
Autonomous F-16 Fighters Are ‘Roughly Even’ With Human Pilots Said Air Force Chief
Recently, the US Air Force successfully tested an AI-controlled F-16 fighter jet against human pilots in simulated dogfights. According to an assessment from Air Force Secretary Frank Kendall, the AI pilot is on track to be as good as any human pilot. He also stated that the AI tested at the beginning of May is roughly on par with senior pilots (those with 2,000-3,000 flight hours). Though Kendall made it clear that AI-controlled aircraft aren’t ready to be deployed yet, he noted that very good progress is being made.
The New Shadow Hand Can Take a Beating
Shadow Robot Company has released a new, three-fingered version of its advanced Shadow Hand, which the company calls the "New Shadow Hand". The new robotic hand was designed with robotics research in mind, meaning it can withstand a serious amount of misuse and abuse. Ruggedness was one of the requirements from robotics and AI labs, which now use reinforcement learning algorithms to train their robots; those training methods involve hundreds or thousands of trial-and-error attempts, so the hardware must not break easily.
Solar-powered robot astronaut could soon be heading to China’s space station
China’s Tiangong Space Station will soon get a new crew member. However, this new crew member is not a human but a humanoid robot named Taikobot. Weighing roughly 55 pounds (25 kg) and standing around 5 feet 5 inches (approximately 1.6 metres) tall, the robot is designed to assist with routine tasks, support the human crew, and operate in microgravity. Taikobot will join a small club of humanoid robots that have gone into space, which currently includes NASA's Robonaut-2 and Russia's Skybot F-850.
▶️ How I Built the NEW World's Fastest Drone (15:05)
Two months ago, Max Verstappen raced in his Red Bull F1 car against the fastest drone in the world, capable of reaching speeds of over 300 km/h. In this video, Luke Maximo Bell, the creator of that drone, explains how he built the drone that went on to become the fastest in the world, reaching speeds of over 480 km/h. I admire Luke's creativity and resourcefulness in building this drone.
▶️ Cafe Robot: Integrated AI Skillset Based on Large Language Models (1:31)
Thanks to large language models, these two robots can operate a café and take orders in natural language, which are then translated into a set of actions that result in a cup of coffee and a sliced cake.
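For readers curious what this pattern might look like in practice, here is a minimal, hypothetical sketch: a large language model translates a customer's natural-language order into a structured action plan that robot control software could execute. The action names, prompt, and choice of model are my own illustrative assumptions, not details from the video.

```python
# A hypothetical sketch of the pattern the video demonstrates: a language
# model turns a natural-language order into a machine-readable list of
# robot actions. Action names, prompt, and model are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You control a café robot. Translate the customer's order into JSON of "
    'the form {"actions": [...]}, using only these actions: '
    "brew_coffee, slice_cake, serve_item."
)

def order_to_actions(order: str) -> list:
    """Ask the model to turn a spoken order into a sequence of robot actions."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": order},
        ],
        response_format={"type": "json_object"},  # force valid JSON output
    )
    return json.loads(response.choices[0].message.content)["actions"]

print(order_to_actions("One flat white and a slice of cheesecake, please."))
```

The interesting part, of course, is everything this sketch leaves out: grounding each abstract action in perception and motion planning is where the real robotics work happens.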
🧬 Biotechnology
Lab-Grown Meat Is on Shelves Now. But There’s a Catch
Huber's Butchery in Singapore has become the first store to sell lab-grown meat directly to customers. However, this lab-grown meat contains only 3% animal cells, with the remainder made up of plant protein. This article explores the challenges and potential of the cultivated meat industry and addresses scepticism about whether such products will satisfy consumers and meet investors’ expectations.
New AI generates CRISPR proteins unlike any seen in nature
Profluent, a bioinformatics startup, has developed an AI platform that can generate millions of never-before-seen CRISPR-like proteins, and it has successfully used one of its AI-designed CRISPR systems, OpenCRISPR-1, to edit human DNA. OpenCRISPR-1 is open source and freely available for ethical research and commercial use.
What hybrid mouse/rat brains are showing us about the mind
Scientists have discovered that brains are more adaptable than previously thought by creating hybrid brains. They injected rat brain cells into the brains of mice lacking a sense of smell and found that the rat cells successfully integrated and partially restored the mice's sense of smell. The researchers hope their discovery could one day lead to the creation of various hybrid brains, including ones with human neurons, to help us better understand brain function and to test new drugs and brain technologies.
💡Tangents
▶️ Nvidia buying Intel, RTX 5000 Greed, AMD Zen 5 Strix, TSMC 2nm (1:47:15)
Here is a very insightful conversation Tom from Moore's Law Is Dead had with Daniel Nenni, a semiconductor industry veteran with over 40 years of experience, about the current state and the future of Intel, AMD, Nvidia, TSMC, and more. It is almost a two-hour-long conversation, but if you are interested in semiconductors, it is worth the time.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"