What was announced at OpenAI Dev Day - Weekly Roundup - Issue #440
This week - xAI releases the Grok chatbot; China aims for mass production of humanoid robots by 2025; a yeast strain with over 50% synthetic DNA is developed; Amazon works on a massive LLM; and more!
On November 6th, 2023, OpenAI hosted its inaugural Dev Day. During the event, which was aimed primarily at developers building AI apps, OpenAI presented new updates and tools available to developers and hinted at a path towards AI agents and, eventually, Artificial General Intelligence (AGI).
Sam Altman, the CEO of OpenAI, started his opening keynote by looking back at what OpenAI has achieved in almost a year since releasing ChatGPT and how this “small research project” grew into one of the most capable multimodal AI assistants available. Altman shared that 2 million developers build AI apps with the OpenAI API and that over 92% of Fortune 500 companies use products built on the OpenAI API in some form. Altman also revealed that ChatGPT is used by 100 million users every week, a number achieved purely through word of mouth.
GPT-4 Turbo, new models and updates to ChatGPT
The first big announcement was the release of GPT-4 Turbo. This updated, more capable version of GPT-4 now has a 128K-token context window, equivalent to about 300 pages of a standard book. It is also more accurate over long contexts, meaning it should be better at remembering long conversations and at understanding long inputs.
With GPT-4 Turbo, developers can now bring knowledge from outside documents or databases into the model. Also, GPT-4 Turbo’s knowledge cutoff is now set to April 2023 (previously, the model was aware of events up to September 2021). Sam Altman said that the goal is to eliminate knowledge cutoff dates and to have OpenAI models be completely up-to-date with the events in the world.
Developers should be happy to hear that GPT-4 Turbo can now return responses as valid JSON (no more prompt-engineering hacks required) and is better at function calling (the OpenAI API’s ability to detect when to call a custom function provided by the developer). OpenAI is also introducing reproducible outputs, which let the model return consistent results across calls, along with other changes that make debugging model responses easier.
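To give a rough idea of what this looks like in practice, here is a minimal sketch using the `openai` Python library; the model name, prompt and seed value are purely illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",               # the GPT-4 Turbo preview model
    response_format={"type": "json_object"},  # JSON mode: output is valid JSON
    seed=42,                                  # reproducible outputs (best effort)
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Reply in JSON."},
        {"role": "user", "content": "List three uses for a 128K context window."},
    ],
)

print(response.choices[0].message.content)
```

Note that JSON mode expects the word “JSON” to appear somewhere in the messages, hence the system prompt above.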
Retrieval-Augmented Generation (RAG) is now native. Files and documents can be uploaded to the platform, where they are automatically chunked, embedded and retrieved at query time, greatly simplifying work for developers building custom bots.
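As a sketch of what native retrieval looks like, assuming the beta endpoints announced at Dev Day (the file name and assistant details are hypothetical):

```python
from openai import OpenAI

client = OpenAI()

# Upload a document; the platform chunks and embeds it automatically.
file = client.files.create(
    file=open("product_manual.pdf", "rb"),  # hypothetical document
    purpose="assistants",
)

# Attach the file to an assistant with the built-in retrieval tool.
assistant = client.beta.assistants.create(
    name="Docs Helper",  # illustrative
    instructions="Answer questions using the attached manual.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[file.id],
)
```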
Many developers will be happy to hear that the rate limits are going up and the prices are going down. GPT-4 Turbo input tokens are 3x cheaper than GPT-4 and output tokens are 2x cheaper. The prices for GPT-3.5 Turbo and fine-tuned GPT-3.5 Turbo are also going down (for more details on the new pricing, check the announcement post).
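For a rough sense of what the cuts mean in practice, here is a back-of-the-envelope comparison, assuming the per-1K-token prices announced at Dev Day (GPT-4: $0.03 input / $0.06 output; GPT-4 Turbo: $0.01 input / $0.03 output):

```python
# Cost of a single large request: 8K input tokens, 1K output tokens.
input_tokens, output_tokens = 8_000, 1_000

gpt4_cost = input_tokens / 1000 * 0.03 + output_tokens / 1000 * 0.06
turbo_cost = input_tokens / 1000 * 0.01 + output_tokens / 1000 * 0.03

print(f"GPT-4:       ${gpt4_cost:.2f}")   # $0.30
print(f"GPT-4 Turbo: ${turbo_cost:.2f}")  # $0.11
```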
DALL-E 3, GPT-4 with Vision and a new text-to-speech model with six voices are now available through the OpenAI API. Whisper, OpenAI’s open-source speech-recognition model, also got a new release, and GPT-3.5 Turbo now supports a 16K context window by default.
OpenAI announced new experimental features. One of them is GPT-4 fine-tuning, which is now in an experimental access program. The second is custom models, which go one step further than fine-tuning: OpenAI is giving selected organizations an opportunity to work with a dedicated group of OpenAI researchers to train a custom GPT-4 model tailored to their specific domain.
An interesting announcement was Copyright Shield. OpenAI commits to stepping in to defend its customers, and to paying the costs incurred, if they face legal claims of copyright infringement. This applies to generally available features of ChatGPT Enterprise and the OpenAI developer platform.
OpenAI also announced changes to ChatGPT, which now uses GPT-4 Turbo under the hood to process prompts for ChatGPT Plus users. Apart from that, ChatGPT can now browse the web, write code and generate images thanks to the integration with DALL-E 3, OpenAI’s text-to-image generator. ChatGPT also got a fresh coat of paint with a new user interface.
Further details on these updates are available on the OpenAI website.
Assistants API
The Assistants API is an interesting addition to the OpenAI platform. It streamlines building custom AI assistants by simplifying some aspects of chatbot development and offloading others to OpenAI. With the Assistants API, it is now much easier to manage the conversations users have with a bot and to provide unique features with the help of function calling. Assistants built with the new API can also have access to Code Interpreter, allowing them to write and execute Python code when needed.
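Here is a minimal, self-contained sketch of the flow, again assuming the beta endpoints announced at Dev Day (the assistant’s name, instructions and prompt are illustrative):

```python
import time

from openai import OpenAI

client = OpenAI()

# An assistant that can write and run Python via the Code Interpreter tool.
assistant = client.beta.assistants.create(
    name="Data Helper",  # illustrative
    instructions="You are a data analyst. Use code when calculations help.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],
)

# Each conversation lives in a thread; the API stores the history for you.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What does $1,000 grow to at 5% compounded annually over 10 years?",
)

# Runs execute asynchronously; poll until the assistant finishes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# Messages come back newest first; reverse them for reading order.
for message in reversed(client.beta.threads.messages.list(thread_id=thread.id).data):
    print(f"{message.role}: {message.content[0].text.value}")
```

The design point is that the thread object replaces the conversation-state bookkeeping developers previously had to do themselves.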
GPTs and the GPT Store
The second big announcement was the introduction of GPTs. They can be described as small, tailored versions of ChatGPT, built for specific purposes. Each GPT can have its own unique instructions, knowledge base and actions. To get a glimpse of what GPT agents can bring to the table, I recommend watching a demo in which a GPT agent connects to a calendar, summarises the schedule and then sends a Slack message - all done from the ChatGPT interface.
GPTs are very easy to build, and you don’t even need to write a single line of code. During the keynote, Sam Altman showed how easy it is to build a custom GPT agent just by talking to the GPT Builder. Once the agent is complete, it can be shared with others or kept private. OpenAI promises to launch the GPT Store - a marketplace for GPT agents, similar to Apple’s App Store or Google Play - later this month. OpenAI also promises to share revenue with the creators of the most popular agents.
More information on GPTs is available on OpenAI's website.
A path towards agents and AGI
GPTs and the Assistants API hint at where OpenAI is heading: towards more powerful and more capable agents and, eventually, AGI. Sam Altman said that the best way to safely introduce AGI into society is through gradual, iterative releases. This approach, he says, lets people see what is possible with these new, powerful tools before the next step is taken.
According to OpenAI, the next stop on the path to AGI is agents - highly capable bots that can plan and execute complex tasks. The open-source community has been playing with the idea of building AI agents since March, and soon OpenAI will bring this functionality to everyone.
OpenAI and the entire AI industry made tremendous progress in the last year. I’m reminded of what Greg Brockman, co-founder of OpenAI, tweeted in February: “Most amazing fact about AI is that even though it’s starting to feel impressive, a year from now we’ll look back fondly on the AI that exists today as quaint & antiquated.” When Sam Altman was closing his opening keynote and inviting people to come next year, he echoed Brockman’s words. It’s going to be interesting to see what the second OpenAI Dev Day will look like.
If you enjoy this post, please click the ❤️ button or share it.
I warmly welcome all new subscribers to the newsletter this week. I’m happy to have you here and I hope you’ll enjoy my work. A heartfelt thank you goes to the three individuals who joined as paid subscribers this week.
The best way to support the Humanity Redefined newsletter is by becoming a paid subscriber.
If you enjoy and find value in my writing, please hit the like button and share your thoughts in the comments. Additionally, please consider sharing this newsletter with others who might also find it valuable.
You can also buy me a coffee if you enjoy my work and want to support it.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🦾 More than a human
Spinal Implant Helps a Man With Severe Parkinson’s Walk With Ease Again
Thanks to an experimental spinal cord implant, Marc Gauthier was able to walk confidently and independently, a significant improvement after living with Parkinson's for three decades. The implant works by mimicking natural brain signals to muscles, offering a new approach to treating movement disorders originating in the brain. While this success is promising, the implant was tuned to work specifically for Gauthier. Tests on more patients are required to assess the efficacy of the treatment that could potentially transform the management of Parkinson's disease.
🧠 Artificial Intelligence
xAI Grok
xAI, an AI lab founded by Elon Musk in March this year as a response to OpenAI, has released its first chatbot, named Grok. According to the release statement, the model took two months to train and already outperforms GPT-3.5 (the model used in the free version of ChatGPT), though it still trails Google’s PaLM 2, Anthropic’s Claude 2 and OpenAI’s GPT-4. Grok is currently available to a limited number of users in the United States and will ultimately be released to subscribers of X’s top-tier subscription service, Premium+. As the announcement says, “Grok is designed to answer questions with a bit of wit and has a rebellious streak”, and the sample responses shared by Musk confirm that.
Kai-Fu Lee’s 01.AI releases bilingual LLM
In late March, Kai-Fu Lee, former VP at Google and founder of Sinovation Ventures, founded 01.AI with the aim of creating the OpenAI of China. The company, valued at $1 billion, has recently released its first model, Yi-34B, an open-source bilingual (English and Chinese) model with 34 billion parameters. It is smaller than models like Falcon-180B and Meta’s Llama 2 70B but shows promising performance: according to the leaderboard run by the AI community platform Hugging Face, Yi-34B ranked first among pre-trained LLMs on several metrics. Lee’s strategy includes both open-sourcing some models and developing proprietary models for commercial products.
Amazon dedicates team to train ambitious AI model codenamed 'Olympus'
While tech giants like Google, Microsoft, and Meta have been actively diving into generative AI, Amazon has remained relatively quiet. However, this is set to change, according to a recent Reuters report. Amazon is reportedly investing millions in developing an ambitious large language model, with the hope of rivalling leading models from OpenAI and Alphabet. Codenamed "Olympus," this model reportedly consists of 2 trillion parameters, making it one of the largest models. Amazon believes that having its own advanced models could make its offerings more attractive on AWS, particularly among enterprise clients seeking access to top-performing models. There is no specific timeline for the release of this new model.
How Europe is racing to resolve its AI sovereignty woes
Establishing a strong AI industry is increasingly crucial, both economically and geopolitically. Europe is actively fostering its local AI industry to maintain AI sovereignty. Leading this charge are Germany's Aleph Alpha and France's Mistral AI, which have raised €460 million ($490 million) and €133 million ($142 million) respectively, in their efforts to develop advanced generative AI models. These investments underscore the geopolitical imperative for Europe to secure a presence in the global AI landscape. This trend is further supported by broader European government initiatives, such as the Netherlands' development of its own homegrown large language model (LLM), aimed at bolstering domestic AI innovation and ensuring the region's technological competitiveness on the world stage.
AI Company Plans to Run Clusters of 10,000 Nvidia H100 GPUs in International Waters
As AI regulations around the world start to take shape, some AI companies may not be so pleased with the idea of reporting to the government what their AI is doing. In response to this, Del Complex is exploring a unique solution: establishing a floating data centre in international waters offering a cluster of 10,000 Nvidia H100 GPUs to train and run AI models without any government oversight. That’s assuming the company is real, which this article doubts.
🤖 Robotics
China says humanoid robots are new engine of growth, pushes for mass production by 2025 and world leadership by 2027
The Chinese Ministry of Industry and Information Technology published a nine-page guideline on humanoid robots, saying that China’s humanoid robots should “realise mass production by 2025”. The ministry aims for China to “establish a humanoid robot innovation system, make breakthroughs in several key technologies and ensure the safe and effective supply of core components” by 2025. By 2027, humanoid robots should “become an important new engine of economic growth” in China. This ambitious timeline aligns with the plans of various companies currently developing humanoid robots (we took a closer look at 10 of them in this article).
Humanoid robots are here, but they're a little awkward. Do we really need them?
Humanoid robots, once only seen in science fiction, are coming and the first wave of commercial humanoid robots is planned to arrive around 2025. "In 10, 20 years, you're going to see these robots everywhere," Agility Robotics co-founder and CEO Damion Shelton said. "Forever more, human-centric robots like that [Agility Robotics’ Digit] are going to be part of human life. So that's pretty exciting." This article explains the broader challenges and goals in humanoid robot development, such as enhancing dexterity and intelligence. It also addresses societal implications, including labour shortages and public acceptance.
▶️ The Evolution of Stretch | Boston Dynamics (8:59)
In this video, Boston Dynamics shares the journey of developing Stretch, their innovative warehouse robot designed for package handling. The story begins with an initial prototype which used parts from their humanoid robot, Atlas. Following early tests in warehouse environments, the Boston Dynamics team refined the concept which led to the Stretch we know now - a solution that effectively meets the demands of real-world warehouse operations and is now ready for deployment.
🧬 Biotechnology
Engineered yeast breaks new record: a genome with over 50% synthetic DNA
Biologists have created a strain of yeast with over 50% synthetic DNA, a key milestone for the Synthetic Yeast Genome Project (Sc2.0), which aims to develop yeast with a fully synthetic genome. This new strain of brewer’s yeast (Saccharomyces cerevisiae) has 7.5 artificial chromosomes. The research, spanning 15 years, has broader implications beyond brewing, including potential applications in drug and fuel production. The project also advances bioengineering by allowing scientists to re-engineer entire chromosomes and explore possibilities beyond those found in nature.
I tried lab-grown chicken at a Michelin-starred restaurant
In this article, Casey Crownhart of MIT Technology Review shares her experience of trying lab-grown meat. “The flavor was savory, a bit smoky from the burnt chili aioli. It was tough to discern, with all the seasonings, sauces, and greens on top, but I thought I detected a somewhat chicken-ish flavor too,” she writes. She also notes that the texture of the meat wasn’t quite right: it was somewhat softer than real meat and resembled plant-based meat alternatives. The restaurant Crownhart visited is one of only two in the US that currently serve lab-grown meat to customers. In June, the US Department of Agriculture permitted the sale of lab-grown meat in the US.
Thanks for reading. If you enjoy this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!