Whispers of GPT-5 - Weekly News Roundup - Issue #452
Plus: Neuralink has implanted a brain chip in a human; Amazon-iRobot $1.7B deal falls through; OpenAI and Microsoft in talks to invest in a humanoid robot company; Gemini Pro is available globally
Welcome to Weekly News Roundup Issue #452. This week's main topic is the rumours surrounding OpenAI's newest model, the highly anticipated GPT-5. We'll discuss when it might be released and what new features the successor to GPT-4 could have.
In other news, Neuralink has successfully implanted its first brain chip in a human. Meanwhile, Amazon's bid to acquire iRobot has fallen through, and Gemini Pro is now globally available within Google Bard. Additionally, OpenAI and Microsoft are in discussions to invest in a humanoid robotics company, and more!
As we approach the first anniversary of the GPT-4 release to the public, rumours about its successor, GPT-5, have started to emerge. So, when can we expect GPT-5 to be released, and what new features is OpenAI planning for their latest model?
Based on a tweet from Jason Wei, a researcher at OpenAI, we can assume that OpenAI began the full training run of GPT-5 about a week ago. That’s the phase in which OpenAI uses all available GPUs to train what will become GPT-5. In the case of GPT-4, this process took about three months. To be on the safe side, since unexpected issues can disrupt training, we can assume GPT-5’s training phase will take about the same amount of time as its predecessor’s. Once training is completed, OpenAI will spend a couple more months on safety research, risk assessment and fine-tuning. For GPT-4, this phase took six months.
Taking all of this together, it should take about nine months for GPT-5 to be ready for public release, placing the release around November 2024. As AI Explained points out in his video, we will most likely see GPT-5 at the end of November, after the US elections, as OpenAI probably does not want to add more fuel to an already heated presidential race (generative AI is already being used to spread political misinformation, as we discussed last week).
In a conversation with Bill Gates, when asked what the key milestones would be over the next two years, Sam Altman pointed first to multimodality - speech in and out, images, video - as those are the things people want. But the bigger improvement, Altman added, will be in the reasoning capabilities of large language models. Right now, GPT-4’s reasoning abilities are quite limited, but they can be improved with clever prompting techniques. Altman gave the example of GPT-4 generating 10,000 different answers to a question and then selecting the best one as its final answer. Interestingly, this is the approach researchers at Google DeepMind took when creating AlphaCode 2: it generates a massive number of candidate answers, evaluates them, and presents the best solution as the final answer. With this approach, AlphaCode 2 beat 87% of some of the best human programmers, whereas base GPT-4 scored in the bottom 5%.
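To make that idea more concrete, here is a minimal best-of-N sketch of the “generate many answers, pick the best” pattern Altman describes. The `generate_answer` and `score_answer` functions below are hypothetical placeholders standing in for an LLM call and an evaluator (a verifier model, unit tests, or a reward model); this is an illustration of the pattern, not OpenAI’s or DeepMind’s actual implementation.

```python
import random


def generate_answer(question: str) -> str:
    # Placeholder for a single sampled (non-deterministic) LLM completion.
    return f"candidate answer #{random.randint(0, 9_999)} to: {question}"


def score_answer(question: str, answer: str) -> float:
    # Placeholder for an evaluator: a verifier model, unit tests
    # (as in AlphaCode-style code generation), or a reward model.
    return random.random()


def best_of_n(question: str, n: int = 10_000) -> str:
    # Generate n candidates, score each one, and return the highest-scoring.
    candidates = [generate_answer(question) for _ in range(n)]
    return max(candidates, key=lambda a: score_answer(question, a))


print(best_of_n("What is the next prime after 97?", n=100))
```

The trade-off is obvious even from this toy version: every extra candidate costs another model call, which is exactly the compute problem discussed further below.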
Another approach to improving the reasoning capabilities of large language models is to use techniques such as Chain of Thought or one of its variants, like Tree of Thoughts. These methods essentially boil down to asking the model to lay out its reasoning step by step before attempting to answer the question. By allowing the model to take its time and “think” about the problem, researchers improved the quality of answers without needing to retrain the model.
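As a rough illustration, here is a sketch contrasting a direct prompt with a Chain-of-Thought prompt. The `ask_llm` function is a hypothetical stand-in for whatever LLM client you use; the only real difference between the two approaches is the prompt wording.

```python
QUESTION = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

# Direct prompt: ask for the answer straight away.
direct_prompt = f"{QUESTION}\nGive only the final amount."

# Chain-of-Thought prompt: ask the model to reason before answering.
cot_prompt = (
    f"{QUESTION}\n"
    "Think step by step: write out your reasoning first, "
    "then give the final answer on the last line."
)


def ask_llm(prompt: str) -> str:
    # Placeholder for a real LLM call (an API or a local model).
    return "(model response would appear here)"


# With the CoT prompt, the model spends tokens working through the problem
# (12 pens = 4 groups of 3, 4 * $2 = $8) before committing to an answer,
# which tends to improve accuracy without retraining the model.
print(ask_llm(cot_prompt))
```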
If you are interested in learning more about introducing reasoning capabilities into AI models, I have written an article that dives deeper into this topic and how it could be the next breakthrough in AI.
However, the methods described above have drawbacks. The biggest is that they require more processing power, which translates to higher operating costs. They are also slower, as it takes more time to process thousands of possible answers. It will be interesting to see how OpenAI approaches this problem. One option is to find a way to optimise the required computations; a brute-force approach of throwing more computing power at the problem is another. I can also see OpenAI introducing a new, higher-tier and more expensive subscription plan that grants access to GPT-5 with deep reasoning capabilities.
GPT-5 is not the only highly anticipated large language model due to be released this year. We are still waiting for Google to release Gemini Ultra, the most capable model in the Gemini family of large language models. Meanwhile, Meta has begun training Llama 3, and we can expect Anthropic to release Claude 3 sometime this year. The open-source community may well produce something interesting, too. 2024 is shaping up to be another year full of breakthrough AI models that will surprise us in one way or another.
I want to give credit to AI Explained, whose video on this topic was the inspiration for this article. I recommend watching it, as he dives deeper into what GPT-5 could be and gives more context about the evidence discussed here.
If you enjoy this post, please click the ❤️ button or share it.
I warmly welcome all new subscribers to the newsletter this week. I’m happy to have you here and I hope you’ll enjoy my work. A heartfelt thank you goes to everyone who joined as paid subscribers this week.
The best way to support the Humanity Redefined newsletter is by becoming a paid subscriber.
If you enjoy and find value in my writing, please hit the like button and share your thoughts in the comments. Additionally, please consider sharing this newsletter with others who might also find it valuable.
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous support towards the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🦾 More than a human
Elon Musk says Neuralink has implanted first brain chip in a human
Elon Musk revealed that the first Neuralink brain-computer interface (BCI) has been successfully implanted in a human. “Initial results show promising neuron spike detection,” Musk said in a post on X. Nothing more is known about the experiment. Neuralink received approval from the FDA in April 2023 to launch clinical trials on humans and in September of the same year opened recruitment for the trials.
🧠 Artificial Intelligence
Bard’s latest updates: Access Gemini Pro globally and generate images
Gemini Pro, Google’s equivalent of OpenAI’s GPT-3.5 (the model that powers the free version of ChatGPT), is now available globally in Bard. The full list of countries where Bard is available and the languages it supports is here.
OpenAI and Other Tech Giants Will Have to Warn the US Government When They Start New AI Projects
The Biden administration is preparing to use the Defense Production Act to compel tech companies to inform the government about their advanced AI projects. The new requirement will give the US government access to key information about some of the most sensitive projects inside OpenAI, Google, Amazon, and other tech companies competing in AI when they train an AI model using a significant amount of computing power. Companies will also have to provide information on safety testing being done on their new AI creations.
Elon Musk's AI startup is reportedly looking to raise up to $6 billion
Elon Musk is reportedly in talks to raise up to $6 billion for his OpenAI competitor, xAI, a big step up from the $1 billion xAI had been seeking last month. In November last year, xAI released Grok, a ChatGPT competitor available to X Premium+ subscribers. The proposed deal would value Musk's AI lab at $20 billion, positioning it roughly on par with Anthropic.
Google Splits Up Its Responsible AI Team
Google’s Responsible AI team, which reviews Google’s AI products for compliance with the company's responsible-AI rules, is undergoing major changes following the departure of its leader, Jen Gennai, earlier this year. According to Wired, the team has been split up and its future is uncertain. A portion of the team is being transferred to the trust and safety division, while the rest remains in place.
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
🤖 Robotics
iRobot and Amazon call it quits, terminate acquisition agreement
Amazon has ended its bid to acquire iRobot, the maker of Roomba robotic vacuums. The $1.7 billion acquisition collapsed after EU regulators blocked the deal, saying it could restrict competition in the robot vacuum cleaner market. As a result of the acquisition falling through, Amazon will pay a $94 million termination fee to iRobot. Following this, iRobot announced it would be cutting 31% of its workforce (350 employees), alongside the departure of its chief executive.
Microsoft and OpenAI are in talks to inject $500 million into humanoid robotics startup Figure AI
Microsoft and OpenAI are reportedly in talks to invest $500 million in Figure AI, one of the leading humanoid robotics companies. If the deal goes through, it could value Figure AI at $1.9 billion, potentially making it the first humanoid robotics unicorn. Recently, Figure’s robots began trials at BMW’s factory in South Carolina.
This robot can tidy a room without any help
A new robotic system called OK-Robot could train robots to pick up and move objects in settings they haven’t encountered before. It’s an approach that might be able to plug the gap between rapidly improving AI models and actual robot capabilities, as it doesn’t require any additional costly, complex training.
Scientists use robot dinosaur in effort to explain origins of birds’ plumage
What can you do to study dinosaurs when there are no living dinosaurs? You can build a robotic one. Meet Robopteryx, a robot resembling a small dinosaur, which a team from South Korea is using to test their ideas about the origins of birds' wings and tails.
Watch a robot with living muscles walk through water
Researchers from Japan have created a tiny robot that moves by contracting lab-grown muscle tissue in its legs. Controlled by electricity, the biorobot can move forward, pivot, and make sharp turns. Standing 3 cm tall, this robot won't be breaking any speed records soon: it moves at a pace of just 5.4 millimetres per minute and needs over a minute to turn 90 degrees. The team's next step is to add joints, which, together with more and thicker muscles, could make the robot move faster.
Drone the size of a slice of bread may give Japan a closer look inside the damaged Fukushima nuclear plant
The reactors inside the tsunami-hit Fukushima Daiichi nuclear power plant are still too dangerous for humans to enter, so TEPCO, the plant’s operator, is sending robots instead. At a recent press conference, TEPCO unveiled two new robots - a drone and a snake-like robot - which will join the menagerie of exotic machines that help monitor and clean up the damaged reactors.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!