Ex-OpenAI board member speaks out - Weekly News Roundup - Issue #469
Plus: xAI's $6B Series B funding round; Elon Musk's plans for a "Gigafactory of compute"; ChatGPT Edu; Apple's Project Greymatter; Miss AI; a bioprocessor with 16 human brain organoids; and more!
Hello and welcome to Weekly News Roundup Issue #469. Another week, another drama at OpenAI. This time, former OpenAI board member and AI policy researcher Helen Toner shares her perspective on the events that led to Sam Altman's removal as CEO of OpenAI in November 2023.
In other news, xAI has closed a $6 billion Series B funding round and Elon Musk plans to build a “Gigafactory of compute”. Meanwhile, OpenAI follows Microsoft and Google in releasing a chatbot aimed at educational institutions, and Bloomberg leaks Apple’s Project Greymatter.
Over in robotics, Unitree Robotics is offering a small humanoid robot for $16,000, while their robot dog, with a rifle strapped to its back, trains with Chinese soldiers. Also, a Japanese robot has set a record for the fastest time to solve a Rubik’s Cube.
We will finish with a biocomputer made with 16 human brain organoids and insights into what we can learn from Estonia’s digital state.
I hope you enjoy this week’s issue!
When Sam Altman was briefly ousted as the CEO of OpenAI on November 17th, 2023, the reason given for that decision was that Altman was not “consistently candid in his communications.” Since nothing more was disclosed, questions quickly arose about what Altman had not told OpenAI’s board of directors. What was he hiding from the board that resulted in his swift removal as CEO? Was it some kind of business deal Altman had made behind the board’s back? Or was it a breakthrough in AI, like the mysterious Q*, that Altman was hiding?
We did not have answers to these questions, until now. On Tuesday, May 28th, The TED AI Show published a conversation with Helen Toner, an AI policy expert and former board member at OpenAI. In that conversation, Toner shared her perspective on the events that led to Sam Altman’s firing.
According to Toner, Sam Altman lied and withheld information from OpenAI’s board, which led the board to lose trust in Altman’s leadership.
Altman did not inform the board that he owned the OpenAI Startup Fund, even while presenting himself as an independent board member with no financial interest in the company. The OpenAI Startup Fund was launched in May 2021, and Sam Altman owned or controlled it until April 1st, 2024. During the Senate hearing on AI in May 2023, when asked whether he made a lot of money, Altman said he had no equity in OpenAI but failed to mention his involvement with the OpenAI Startup Fund.
Additionally, Altman “on multiple occasions” shared inaccurate information about the company’s safety processes. According to Toner, he also withheld crucial information from the board. To illustrate how much the board was kept in the dark about what was going on inside the company, Toner said that the board members learned about the launch of ChatGPT in November 2022 not from Altman but from Twitter; they were not informed in advance.
The atmosphere inside OpenAI was far from healthy. After co-authoring a paper on AI policy that appeared critical of OpenAI and more positive about Anthropic’s approach to AI policy and safety, Toner said she was personally targeted by Altman. Apparently, the paper angered Altman, who then began taking steps to push Toner off the board.
After two OpenAI executives shared with the board their experiences of “psychological abuse” and gave evidence of Altman “lying and being manipulative in different situations,” Toner and other board members concluded that Altman was not fit to lead OpenAI and removed him as CEO on November 17th, 2023.
Toner also shared what was happening inside the company during those turbulent days. Apparently, OpenAI employees were presented with a binary choice: either Altman comes back, or OpenAI falls apart. Toner said other options were available, but that is not how the situation was framed to employees. Many did not want to see OpenAI implode, either because of financial incentives (such as holding shares in the company) or because they simply liked working there. Others feared going against Altman.
Eventually, Sam Altman was reinstated as CEO of OpenAI. Shortly after his comeback, OpenAI announced a new board of directors. Helen Toner, Ilya Sutskever, and presumably others who voted against Altman were removed from the board. Sutskever remained at OpenAI, where he reportedly focused all his attention on the Superalignment team. On May 14th, 2024, OpenAI announced that Ilya Sutskever was leaving the company (I wrote more about that here). Shortly after the news about Sutskever broke, Jan Leike, one of the leaders of the Superalignment team, also announced his departure from OpenAI. Recently, Leike joined Anthropic to continue his work on superalignment.
Two of OpenAI’s current board members, Bret Taylor and Larry Summers, rejected Toner’s claims in an article published in The Economist. They also pushed back against Tasha McCauley, another former OpenAI board member, who, together with Toner, argued that OpenAI’s self-regulatory structure had failed and that public oversight of AI is needed to ensure it benefits “all of humanity.”
Meanwhile, after the disintegration of the Superalignment team, OpenAI announced the creation of a Safety and Security Committee, which will be responsible for making recommendations on critical safety and security decisions for all OpenAI projects. The Safety and Security Committee will be led by Sam Altman and board directors Bret Taylor, Adam D’Angelo and Nicole Seligman.
As all of this is happening, OpenAI revealed that the training of its next frontier model has already begun.
I highly recommend listening to the full conversation Helen Toner had on The TED AI Show. I have only recapped the main points; she goes into much more detail about what it was like to work at OpenAI and with Sam Altman. The second part of the conversation is also interesting, as it focuses on AI policy and why we need to regulate AI to minimize the chances of this powerful technology being misused to harm or misinform people.
If you enjoy this post, please click the ❤️ button or share it.
Do you like my work? Consider becoming a paying subscriber to support it.
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous support towards the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🦾 More than a human
Bilingual AI Brain Implant Helps Stroke Survivor Converse in Both Spanish and English
Neuroscientists have developed a brain implant that helped a bilingual person, who was left paralysed and unable to speak after a stroke, communicate again. Unlike other brain implants, this one can detect whether the patient is thinking in English or Spanish and decode the message they want to say. According to the researchers, the implant distinguished between English and Spanish based on the first word with 88% accuracy and decoded the correct sentence with 75% accuracy. Additionally, the research provided more insight into how our brains process language, revealing that much of the activity for both Spanish and English comes from the same area of the brain.
🧠 Artificial Intelligence
Big tech has distracted world from existential risk of AI, says top scientist
Speaking with The Guardian at the AI Summit in Seoul, Max Tegmark, one of the leading voices in the AI safety community, said that Big Tech has succeeded in distracting the world from the existential risk to humanity that artificial intelligence still poses. Tegmark believes this shift is influenced by industry lobbying, similar to how the tobacco industry delayed smoking regulation. He argues that only government-imposed safety standards can ensure responsible AI development, as tech leaders may feel powerless to stop advancements due to competitive pressures.
x.AI - Series B Funding Round
xAI, Elon Musk’s competitor to OpenAI, has announced a $6 billion Series B funding round. The investment will be used to develop the company’s own AI models, bring xAI’s first products to market, build advanced infrastructure, and accelerate the research and development of future technologies.
Elon Musk plans xAI supercomputer
Elon Musk is planning to build, possibly together with Oracle, the “Gigafactory of Compute” by 2025 to power xAI’s models, according to a report published by The Information. In a presentation for investors, Musk revealed that the new supercomputer will use as many as 100,000 Nvidia H100 GPUs, making it at least four times larger than the largest existing GPU clusters.
AI darling Nvidia's market value surges closer to Apple
If I had to pick one winner of the generative AI boom, I’d pick Nvidia. Since the release of ChatGPT, Nvidia’s chips have been in high demand, propelling the company’s market value past $1 trillion, then $2 trillion, and now ever closer to surpassing Apple on the way to $3 trillion.
Introducing ChatGPT Edu
OpenAI announced ChatGPT Edu, a new service powered by GPT-4o, with features similar to ChatGPT Plus but tailored for universities to “responsibly deploy AI to students, faculty, researchers, and campus operations.” The release of ChatGPT Edu follows similar steps from Microsoft and Google, which have also released AI tools geared towards educational institutions.
Apple Bets That Its Giant User Base Will Help It Win in AI
We are a little over a week away from WWDC 2024, where Apple is rumoured to showcase a suite of new products and services infused with AI features. In his newsletter at Bloomberg, Mark Gurman shares a detailed leak of what we can expect to see from Apple at WWDC 2024. This includes Project Greymatter, a set of AI tools that Apple will integrate into core apps like Safari, Photos, and Notes.
Microsoft CEO Satya Nadella is reportedly worried about an OpenAI deal with Apple
The Information reports that Apple has signed a deal with OpenAI to use its large language models in Apple’s products and services. If true, this would be a big win for Sam Altman and would cement his role as OpenAI CEO. However, Satya Nadella, Microsoft’s CEO, is reportedly concerned about how the deal could affect Microsoft's product ambitions.
Nvidia’s rivals take aim at its software dominance
Nvidia’s chips and CUDA, Nvidia’s proprietary software for programming them, have become the industry standard. However, many companies are unhappy with this state of affairs and are developing Triton, an open-source alternative language for GPU programming. Supporters of the initiative, which include Intel, AMD, and Qualcomm, as well as Meta, Microsoft, OpenAI, and Google, hope that Triton will make GPU programming, and switching between different GPU manufacturers, easier. This flexibility is especially attractive for AI companies that cannot afford to wait for Nvidia GPUs and could benefit from switching to competitors’ hardware.
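For readers curious what “GPU programming in Triton” actually looks like, here is a minimal sketch of an element-wise addition kernel, adapted from Triton’s public tutorials (the tensor sizes, block size, and names are illustrative, and it assumes a CUDA-capable GPU with the torch and triton packages installed). The point is that the kernel is written in Python-like code that Triton compiles for the GPU, which is the kind of abstraction that makes moving between hardware vendors easier:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                        # which block this program instance handles
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                        # guard against out-of-bounds accesses
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

# Illustrative usage (sizes and block size are arbitrary choices for this sketch)
x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = lambda meta: (triton.cdiv(x.numel(), meta["BLOCK_SIZE"]),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)                      # matches the plain PyTorch result
```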
'Miss AI': World's first beauty contest with computer generated women
The Fanvue World AI Creator Awards has launched the world’s first AI beauty pageant. The participants, all AI-generated female models, will compete for a share of $20,000 across three categories: appearance, use of AI tools, and social clout. The judging panel will consist of two humans and two AI models. The co-founder of Fanvue said he hopes the contest will become “the Oscars of the AI creator economy.”
Sony: Declaration of AI Training Opt Out
Sony Music Group (SMG) has announced a firm stance on the use of its content for AI development. While SMG supports the integration of responsibly produced AI in music creation, it emphasizes the need to respect songwriters' and recording artists' rights. Therefore, SMG and its affiliates, Sony Music Publishing (SMP) and Sony Music Entertainment (SME), prohibit any unauthorized text or data mining, web scraping, or similar activities on their content for AI training or commercialization. With this declaration, Sony has effectively forbidden the use of the work of many artists in AI development.
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
🤖 Robotics
Unitree Robotics unveils G1 humanoid for $16k
Unitree, a Chinese robotics company, has made its G1 humanoid robot (or “agent,” as the company calls it) available for purchase for $16,000. For that price, you get a 1.27 m tall humanoid robot with impressive capabilities, as the company proudly demonstrates in its promotional video. The G1 is all-electric, with a battery lasting two hours on a single charge. Due to its size, the G1 may not be the best choice for the workplace, but its affordable price and the optional Edu version position it as a good platform for research and education.
Meet the Chinese army’s latest weapon: the gun-toting dog
The Chinese army has unveiled its latest weapon: a robotic dog with an automatic rifle mounted on its back. Alongside a similarly armed quadcopter drone, the robot dog participated in joint military drills with Cambodia. The robot is based on Unitree Robotics' Go2 robot dog. The company has denied selling products to the Chinese military, and it is unclear how the army procured the robot.
OpenAI is restarting its robotics research group
After a three-year break, OpenAI is restarting its in-house robotics research team and is hiring robotics research engineers. The first robotics research team at OpenAI was shut down in 2021 so the company could focus on other research areas. However, since the release of ChatGPT, OpenAI has become more involved in robotics, notably by investing in Figure, one of the humanoid robotics companies promising to bring commercial humanoid robots to market in the next year or two. It is also worth remembering that building a household robot has long been one of OpenAI’s technical goals.
▶️ How AI Will Step off the Screen and into the Real World | Daniela Rus (12:54)
In this TED Talk, Daniela Rus, a robotics and AI researcher from MIT CSAIL, explains how liquid networks work and how they could make robots smarter and more adaptable to changing conditions—something that traditional neural networks struggle with—while being smaller and easier to understand.
▶️ The fastest robot to solve a puzzle cube (0:48)
Engineers from Mitsubishi Electric have built TOKUFASTbot, a robot that can solve a Rubik’s Cube in 0.305 seconds, setting a new world record. For comparison, the human record for solving a Rubik’s Cube is 3.13 seconds.
🧬 Biotechnology
World's first bioprocessor uses 16 human brain organoids for ‘a million times less power’ consumption than a digital chip
A Swiss biocomputing startup, FinalSpark, has created a bioprocessor made from 16 human brain organoids (organoids are miniature, simplified versions of organs grown in vitro from stem cells, mimicking the organ's structure and function). The company hopes that other institutions will use the bioprocessor remotely via its Neuroplatform for biocomputing research. Bioprocessors like the one FinalSpark is offering promise to use over a million times less energy than traditional digital chips and could help shape the next steps in AI computing.
💡Tangents
▶️ Estonia | The Digital State (22:59)
I highly recommend watching this video about how Estonia transformed itself from a post-Soviet republic into a digital society and what we can learn from Estonians. It walks through the timeline of reforms Estonia enacted, starting with raising tech literacy and gradually digitising all governmental activities and public services, leading to the creation of the world’s first digital, paperless society, which the video also presents as one of the most business-friendly countries in the world. If you happen to live in Estonia, please share, either in the comments or privately with me, what interacting with Estonia’s digital services looks like in practice.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"