OpenAI Eyes a $150 Billion Valuation - Sync #484
Plus: 60 countries endorsed a “blueprint” for military AI; Reflection 70B drama; "cyborg worms" controlled by AI; the feasibility of data centres in space; and more!
Hello and welcome to Sync #484!
The tech world is talking about OpenAI’s latest model, o1. I am working on an article taking a closer look at o1 and its implications for AI development, which will be out soon. However, that wasn't the only news surrounding OpenAI this week. Before o1 was released, reports surfaced that OpenAI is in talks to raise $6.5 billion and is seriously considering building its own chips with TSMC, and that is what we will take a closer look at in this week’s issue of Sync.
Elsewhere in AI, sixty countries have endorsed a “blueprint” for the use of AI in the military. We also explore the fable of Reflection 70B and the lessons the AI community should take from it, while Jon Y from Asianometry asks where the AI boom is heading.
Over in robotics, Amazon acquired a startup focused on solving the complex problem of picking and handling a wide range of physical objects, DeepMind teaches robots to perform complex tasks requiring dexterity, and the former head of Alphabet’s AI-powered robotics moonshot shares his experience with building robots for everyday life.
We will close this week’s issue with plants mining metals from the soil, day zero antivirals, and a bioengineered hangover cure.
Enjoy!
OpenAI Eyes a $150 Billion Valuation
OpenAI just dropped their latest model, o1, and the entire tech world is talking about it, examining what it can do and what it means for future developments in artificial intelligence. However, that was not the only news surrounding OpenAI this week.
The big news before o1 was released was the report that OpenAI is in discussions over a $6.5 billion funding round, which would raise the company’s valuation to $150 billion.
The list of potential investors includes Microsoft, Nvidia, and Apple. Each of these tech giants is, in one way or another, already invested in OpenAI’s success. Microsoft previously invested over $10 billion into OpenAI and is OpenAI’s largest technological and business partner. Nvidia provides the hardware needed to train and run OpenAI’s models. Jensen Huang, CEO of Nvidia, even personally delivered the first DGX H200 to OpenAI. Apple, meanwhile, is a relatively new partner. The partnership between Apple and OpenAI was the subject of many rumours surrounding Apple’s plans to introduce large language models and generative AI to their products. Eventually, Apple chose to build its own AI models for Apple Intelligence but opened the option to use third-party models for more complex queries, with ChatGPT being the first to be supported.
Recent reports suggest OpenAI desperately needs a cash injection. The Information reported in July that OpenAI is on track to lose over $5 billion this year and could run out of cash by the end of the year. For comparison, Anthropic expects to burn “only” $2.7 billion in 2024.
Developing AI models is an expensive business. In February 2023, SemiAnalysis reported that it costs $694,444 per day to operate ChatGPT in computing hardware costs alone. When you add operational, staffing, and R&D costs, the figures quickly escalate into the billions.
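For a sense of scale, a quick back-of-envelope annualisation of that reported daily hardware figure alone:

```python
# Back-of-envelope annualisation of SemiAnalysis's reported figure
# (hardware costs only, before staffing, operations, and R&D)
daily_hardware_cost = 694_444  # USD per day, as reported in Feb 2023
annual_hardware_cost = daily_hardware_cost * 365

print(f"${annual_hardware_cost:,} per year")  # ≈ $253 million per year
```

Roughly a quarter of a billion dollars a year just to keep the hardware running, which makes the reported $5 billion annual loss easier to believe once everything else is added on top.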
If the funding round is successful, it would buy OpenAI more time to find a sustainable business model. OpenAI generates about $2 billion in annual revenue and expects to double that in 2025, according to the Financial Times, with most of the revenue expected to come from business customers looking to implement OpenAI’s models in their operations.
In addition to talking with investors, OpenAI is also in talks with banks to secure an additional $5 billion in credit lines, reports Bloomberg.
But there are some conditions
However, there will reportedly be some strings attached to this $6.5 billion deal. As Reuters reports, some investors are asking for a change to OpenAI’s corporate structure and the removal of the profit cap for investors.
OpenAI is known for its rather unusual corporate structure, in which a non-profit overseen by the OpenAI board controls the capped-profit company that develops AI models like GPT or the new o1. This structure is a result of how OpenAI began. OpenAI was founded in 2015 as a non-profit. However, in 2019, in order to attract more investors, it became a capped-profit company. The company explains that this structure allows OpenAI to accept investments and operate more like a traditional tech startup while remaining bound by its original mission of “building safe and beneficial artificial general intelligence for the benefit of humanity.”
Existing investors are bound by a capped limit on their return on investment, with any additional returns directed to the non-profit.
However, OpenAI in 2024 is a very different company than it was in 2019. Over the past two years, the company has experienced massive growth, evolving from a small AI lab into a leader in AI development. It kickstarted the current AI boom with the release of ChatGPT in November 2022 and has been at the centre of conversations about AI ever since. ChatGPT became, at the time, the fastest consumer application to reach 100 million users, and OpenAI now reports having 200 million active users.
Investors see how much OpenAI has grown and hope for even greater growth, especially as AGI is no longer seen as a distant fantasy but a real possibility. However, the profit cap hinders the potential for generating massive returns on their investments.
It remains to be seen whether OpenAI will change its corporate structure. The non-profit, and later capped-profit, structure was established to ensure that OpenAI focuses on developing safe and beneficial AGI, rather than maximising profits for shareholders. While the removal of the profit cap would benefit investors, it would also move OpenAI further from its original mission as an open AI research lab.
OpenAI’s custom AI chips
One of the top priorities for OpenAI will be finding ways to reduce costs and avoid burning through its cash reserves. One potential solution could be designing and using its own AI chips. This is something other big companies, such as Google with their TPUs, Amazon with Graviton, and Tesla, to name a few, have already done.
Using custom AI chips is something OpenAI has been exploring since last year. In fact, Sam Altman, OpenAI's CEO, was in talks in the Middle East to raise funds for the AI chip project shortly before he was ousted as CEO last year.
Recent reports suggest that OpenAI is moving forward with its own AI chip project, which will be produced by TSMC using the cutting-edge 1.6 nm "A16" process node. OpenAI would join many other tech companies that have reached a point where designing and using custom-made chips tailored to their specific needs makes more sense than relying on products from Nvidia, AMD, or Intel.
Going with custom AI chips presents several advantages. First is a reduced dependency on hardware manufacturers. Since the beginning of the AI boom, there has been fierce competition to acquire Nvidia’s latest AI chips, which has massively inflated the prices of GPUs. Even though OpenAI, as one of the biggest players in the AI scene, might enjoy preferential treatment from Nvidia and be among the first to receive the newest GPUs, it would still remain dependent on a single supplier.
Bringing chip design in-house also opens the possibility of building chips specifically designed to run OpenAI’s models. Unlike off-the-shelf chips like Nvidia’s H200, which need to be general enough to handle various workloads, OpenAI’s custom chips can be designed and optimised specifically for its models. This would result in gains in computing speed and power efficiency, ultimately lowering operating costs—something OpenAI would greatly benefit from.
If you enjoy this post, please click the ❤️ button or share it.
Do you like my work? Consider becoming a paying subscriber to support it
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous support towards the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🦾 More than a human
The First Person to Receive an Eye and Face Transplant Is Recovering Well
In May 2023, Aaron James, disfigured by a severe accident in 2021 that caused the loss of much of the left side of his face and his left eye, became the first person to undergo a combined whole-eye and partial-face transplant. Now, over a year later, James is recovering well, and although the transplanted eye has not restored vision, it has maintained its shape and blood flow, with the retina showing an electrical response to light. The article also includes an interview with James, who shares what the recovery process looked like and how the surgery changed his life.
🧠 Artificial Intelligence
Few have tried OpenAI’s Google killer. Here’s what they think.
The first reviews of SearchGPT, OpenAI’s chatbot-like search engine, are out, and the results are mixed. It apparently shows some promise but lacks Google’s specialised functions and can suffer from hallucinations. At this point, SearchGPT does not appear ready to replace Google Search.
The fable of Reflection 70B
Last week, a new open model named Reflection 70B was released, claiming to be "the world’s top open-source model." However, after closer inspection by the community, cracks began to show in this narrative. This article breaks down what happened and the lessons the AI community should take from it.
▶️ Where is the AI Boom Going? (16:30)
In this video, Jon Y from Asianometry reflects on the current state of the AI landscape, both from the hardware and software sides. He presents many interesting ideas and insights, such as the comparison between today’s OpenAI and IBM in the 1980s when the latter was introducing PCs, and discusses which AI applications are actually useful and profitable. According to Jon’s analysis, much depends on the development of the next generation of foundation models, and the next two years of scaling will be crucial for the future of AI.
Sixty countries endorse 'blueprint' for AI use in military; China opts out
At a summit in Seoul, around 60 countries, including the US, endorsed a non-binding "blueprint for action" to regulate the responsible use of AI in military settings, but China and about 30 other nations did not support it. The summit focused on more actionable steps, such as risk assessments and maintaining human control over AI in military operations, particularly in nuclear contexts. The blueprint also emphasized preventing AI from facilitating the proliferation of weapons of mass destruction.
CERN for AI is the EU’s ‘last chance’ to gain on foreign competition
Europe lags behind the US and China in AI, but a proposed "CERN for AI" could help Europe close this gap and even become a leader in AI innovation. Modelled after how CERN transformed particle physics research in Europe, this initiative aims to centralize AI efforts, attract top talent, and enhance infrastructure. Although still in its early stages, it could be a crucial opportunity for Europe to gain a competitive edge in AI.
Canva says its AI features are worth the 300 percent price increase
Recently, Canva announced an entire suite of new generative AI tools coming to their popular all-in-one design platform. However, these new tools will come at a price—three times the price, to be exact. Some Canva users in the US report that their Teams subscription has increased from $120 per year for up to five users to $500 per year. Canva says the increase is justified due to the “expanded product experience” and the value that generative AI tools have added to the platform. Customers are, unsurprisingly, not happy with these changes.
Scientists Make ‘Cyborg Worms’ with a Brain Guided by AI
Scientists have created "cyborg worms" controlled by an AI that tracks the worms' location and guides them toward a food source by targeting specific neurons in their bodies using light to control their movement. This research demonstrates the potential for AI and biological systems to collaborate, with possible medical applications like improving deep-brain stimulation for treating conditions such as Parkinson's disease.
The /llms.txt file
While many people have politely asked large language models not to use their websites’ content, others have proposed the llms.txt file format to help them instead. The goal of this proposed file is to provide concise, LLM-readable information about a website, helping large language models access its essential content. It is worth stressing, however, that this structured information is not intended for training; it is meant to be used by AI assistants when they access the website.
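As proposed, llms.txt is a plain markdown file served at a site’s root (`/llms.txt`): an H1 title, a blockquote summary, and sections of annotated links pointing to LLM-friendly versions of key pages. A minimal sketch of what such a file might look like (the site name, URLs, and section names here are purely illustrative):

```markdown
# Example Project

> A short, plain-language summary of the site, written for language models.

## Docs

- [Quick start](https://example.com/quickstart.md): Minimal setup guide
- [API reference](https://example.com/api.md): Endpoint documentation

## Optional

- [Changelog](https://example.com/changelog.md): Release history, safe to skip
```

The appeal of the format is that it mirrors robots.txt: a single, predictable location where an AI assistant can find a curated map of the site instead of scraping and guessing.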
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
🤖 Robotics
This Could Be the Start of Amazon’s Next Robot Revolution
Another AI startup got absorbed by a tech giant. This time, Amazon made a deal with Covariant, an AI startup focused on solving the complex problem of picking and handling a wide range of physical objects. As this article argues, the collaboration echoes Amazon’s acquisition of Kiva Systems, which revolutionised warehouse operations with mobile robots. It may take years to implement Covariant’s solutions fully, but this move could dramatically reduce the number of human workers required in Amazon’s warehouses, bringing the idea of fully automated warehouses closer to reality.
DeepMind: Our latest advances in robot dexterity
Researchers from DeepMind shared their latest results on teaching robots to perform complex tasks requiring high dexterity. They presented two projects: ALOHA Unleashed and DemoStart. ALOHA Unleashed successfully taught a two-arm robot to perform complex manipulation tasks such as tying shoelaces, hanging shirts, repairing robots, inserting gears, and even cleaning a kitchen. DemoStart demonstrated a method for training multi-fingered robotic hands using simulations, requiring far fewer examples than traditional methods. These breakthroughs aim to improve robots' ability to interact with real-world objects, bringing us closer to highly capable, dexterous robots for everyday tasks.
Inside Google’s 7-Year Mission to Give AI a Robot Body
In this article, Hans Peter Brondmo shares what he has learned from leading Google X’s robotics division for over seven years and the challenges the team faced in making robots useful in daily life outside factories and warehouses. He shares his opinions on the current, seemingly booming, robotics industry, arguing that it is focusing on the wrong thing. He also expresses concerns about applying Silicon Valley’s style of building companies, with its focus on “minimum viable products”, to robotics, compounded by VCs’ general aversion to investing in hardware and their lack of the patience required to solve such complex problems.
🧬 Biotechnology
How plants could mine metals from the soil
Phytomining is a method where plants absorb and store metals, such as nickel, in their tissues, offering a more sustainable alternative to traditional mining. The U.S. Department of Energy recently allocated $9.9 million to fund seven projects aimed at improving this process, focusing on increasing yields through plant selection, genetic modification, and sustainable practices. While promising, phytomining still faces challenges in terms of efficiency and environmental impact, such as emissions from burning plants to extract metals. Researchers are working to scale the process and reduce these drawbacks.
Day Zero Antivirals for Future Pandemics
The Coalition for Epidemic Preparedness Innovations aims to reduce vaccine development to 100 days to respond to future pandemics. For context, COVID-19 vaccines were developed in just 326 days, which is exceptionally fast for vaccine development. A promising approach to hitting the 100-day target is to mimic the innate immune system, which uses natural defences like physical barriers (e.g., mucus) and antiviral molecules. Researchers are exploring various methods to create antivirals that target fundamental viral properties and offer early, preventive protection against future pandemics.
Silicon Valley’s new wedding perk: A bio-engineered hangover cure
Apparently, bottles of ZBiotics, a genetically engineered hangover-prevention drink, are becoming popular at weddings in the Bay Area, with some brides spending over $1,000 to provide it for guests. While some users report positive effects, including reduced hangover symptoms, others remain sceptical about its efficacy. Despite mixed reviews, ZBiotics continues to grow in popularity within the wedding scene.
💡Tangents
▶️ Does It Make Sense To Put Data Centers In Space? Can They Really Cost Less To Operate? (16:36)
The computing power needed to train next-generation AI models is skyrocketing, and with it, the energy demand is also increasing. Many companies are trying to solve this problem, and one of them is Lumen Orbit, which proposes to place AI data centres in Earth’s orbit. In this video, Scott Manley dissects Lumen’s plans and highlights the engineering challenges the startup would need to solve to make their vision of training AI in space a reality, from selecting the right orbit to addressing cooling issues.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"