Whispers of Gemini and GPT-5 - H+ Weekly - Issue #426
This week - OpenAI announces Frontier Model Forum; Waymo shuts down its trucking division; how old humans can get; AI will write 80% of code “sooner than later”; and more!
There isn't much information out there about Gemini, Google's next-generation foundation model, but from what was revealed during Google I/O and what has been said in interviews, we can make an educated guess about what Gemini might look like.
The existence of Gemini was announced during the Google I/O conference this year. Sundar Pichai, the CEO of Google, revealed the combined forces of Google Brain and DeepMind (now working together as Google DeepMind) are working on Gemini.
According to Pichai, Gemini was built from the ground up to be multimodal and highly efficient in tooling and API integration. Although not explicitly stated, we can reasonably expect Gemini to also support multiple languages. Similar to PaLM 2, Gemini will be offered in various sizes and capacities. PaLM 2 comes in four sizes (codenamed Gecko, Otter, Bison, and Unicorn), with the smallest size being able to fit into a smartphone and operate without requiring internet access. It was also said that Gemini will enable future innovations, such as memory and planning.
As of May 2023, Gemini was still in the training process. After the training phase is complete, it will need to go through fine-tuning and safety evaluations (for context, OpenAI spent six months fine-tuning GPT-4). There is no official release date. However, according to The Wall Street Journal, Demis Hassabis, who oversees the project, said during a companywide meeting that Gemini would become available later this year.
So, what can we expect from Gemini? I don’t think Gemini will be a huge, monolithic large language model with trillions of parameters. Instead, I expect it to be a collection of interconnected AIs with one central AI orchestrating them. That is something Demis Hassabis described in an interview with The Verge - a central AI that dispatches tasks to specialised AIs as needed. “I actually think that probably is going to be the next era”, said Hassabis as he described this modular architecture. This is also most likely how GPT-4 works under the hood. Google DeepMind will just take it to the next level.
As for the planning part, I initially thought that meant Gemini would have functionalities similar to what the open-source community has done with AutoGPT and other autonomous GPT agents. I still believe Gemini will have some AutoGPT-like capabilities, but they will likely be optimised to work within Google's ecosystem. It wouldn't be surprising if Google services like Docs, Search, or Gmail each had their own specialised AIs trained specifically for use within those services. The frontend AI, which people will interact with, would then be responsible only for figuring out which service to call and presenting the results. This would align with Pichai’s statement about Gemini being “highly efficient in tooling and API integration”.
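To make that idea more concrete, here is a minimal, entirely hypothetical sketch of the routing pattern described above: a central “frontend” model decides which specialised model to call and then presents the result. Everything in it - the specialist functions, the keyword routing, the service names - is my own illustration and not based on anything Google has published about Gemini.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Request:
    text: str


# Imaginary service-specific models - stand-ins for AIs trained for one Google product each.
def docs_model(req: Request) -> str:
    return f"[Docs AI] Drafted a document for: {req.text}"


def search_model(req: Request) -> str:
    return f"[Search AI] Search results for: {req.text}"


def gmail_model(req: Request) -> str:
    return f"[Gmail AI] Drafted an email about: {req.text}"


SPECIALISTS: Dict[str, Callable[[Request], str]] = {
    "docs": docs_model,
    "search": search_model,
    "gmail": gmail_model,
}


def central_ai(req: Request) -> str:
    """The frontend model: picks a specialist, calls it, and returns its answer.

    A real system would use a learned router; simple keyword matching is used
    here only to keep the sketch self-contained.
    """
    text = req.text.lower()
    if "email" in text or "reply" in text:
        route = "gmail"
    elif "write" in text or "draft" in text:
        route = "docs"
    else:
        route = "search"
    return SPECIALISTS[route](req)


if __name__ == "__main__":
    print(central_ai(Request("draft a blog post about cryonics")))
    print(central_ai(Request("reply to the email from my landlord")))
```

In a real system the routing would of course be learned rather than keyword-based, but the shape - one orchestrator, many specialists - is the same.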
This modular architecture, with many smaller specialised LLMs working together, makes sense from both a product and an engineering point of view. When Google released Bard, it highlighted how deeply its services are integrated into the chatbot. This strategy keeps users within the Google ecosystem, which is exactly what Google wants. Again, what Hassabis said in the interview with The Verge supports the hypothesis of Gemini being more product-focused. “I actually think there’s a really neat feedback loop now between products and research where they can effectively help each other”, said Hassabis in the interview.
From an engineering perspective, this approach also makes sense. It is easier to manage and optimise a smaller, highly specialised AI than one large, monolithic model, and smaller models are cheaper to train and require fewer resources to run.
Meanwhile, people on the internet noticed that OpenAI has filed a trademark application for GPT-5. This has ignited discussions about the potential release of GPT-4's successor. However, applying for a trademark does not necessarily mean that a functional product exists. Companies often register trademarks or patents for ideas that are still in development, either to gain a competitive advantage or to safeguard their intellectual property. In April this year, Sam Altman said that OpenAI is not currently training GPT-5. “We are not and won’t for some time”, he said.
From H+ Weekly
My article on How AI Is Reshaping Hollywood has been published. It is an expanded version of what I wrote in Issue #424, where I go into more detail about how AI is being used in the movie industry and how it can bring forth a new wave of creativity in cinema. I hope you’ll enjoy it!
Becoming a paid subscriber now would be the best way to support the newsletter.
If you enjoy and find value in what I write about, feel free to hit the like button and share your thoughts in the comments. Share the newsletter with someone who will enjoy it, too. That will help the newsletter grow and reach more people.
🦾 More than a human
This Prosthetic Limb Actually Attaches to the Wearer’s Nerves
In 2020, Swedish researchers tried a new method of connecting a prosthetic limb directly to a patient's nervous system. The method involved dissecting the ends of whole nerves in the residual limb into fascicles, or small bundles of nerve fibres, and wrapping them with muscle tissue taken from elsewhere in the body. Once the nerves grew into the muscles, the researchers placed electrodes to record, in real time, which nerve signals were coming from each fascicle. Those signals are then used to control the prosthetic arm. Two years later, everything works as planned. “Currently, he [the patient] can open and close the hand, rotate the hand, flex and extend the elbow, all by thinking about it”, said one of the researchers involved in the study.
▶️ Storing dead people at -196°C (5:35)
Tom Scott visits Tomorrow Bio - a Swiss company offering its clients a chance of living again by freezing their bodies after death in the hope that technology and advancements in medicine will be able to bring them back to life. The CEO of Tomorrow Bio explains what the process of preserving the body looks like, from the cryonics process to securing funds to keep the bodies safely stored for years and decades. If you want to learn more about cryonics, then check out this article I wrote about it.
How Old Can Humans Get?
How long can a human live? 100 years? 120 years? According to João Pedro de Magalhães, a professor of molecular biogerontology at the Institute of Inflammation and Ageing at the University of Birmingham, UK, humans could live for 1000 years. In this interview with Scientific American, Magalhães shares insights from his research into the biology of some very long-lived animals, which opens the possibility of improving our own biology to be better at repairing DNA and eliminating cancer, extending our lifespans to hundreds or maybe even thousands of years.
🧠 Artificial Intelligence
GitHub CEO says Copilot will write 80% of code “sooner than later”
In this interview, Thomas Dohmke, the CEO of GitHub, shares his vision of programming enhanced by AI tools such as GitHub Copilot. In his opinion, programmers will not go away, but their work will shift from writing code towards system design. And when they do have to write code, tools like Copilot will make them more efficient, says Dohmke. He also shares his vision of AI tools opening programming to more people than before, changing how innovation happens, and explains how Copilot has brought back the joy of coding for him.
Frontier Model Forum
OpenAI, in collaboration with Anthropic, Google, and Microsoft, has announced the establishment of the Frontier Model Forum - an industry body focused on ensuring the safe and responsible development of frontier AI models. The Forum's primary objectives are to advance AI safety and promote responsible development practices for frontier models. It aims to share knowledge and best practices with policymakers, academics, civil society, and others to foster responsible AI development. Additionally, the Forum will support efforts in applying AI in addressing society's biggest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
Hugging Face, GitHub and more unite to defend open source in EU AI legislation
A coalition of open-source AI stakeholders, including Hugging Face, GitHub, EleutherAI, Creative Commons, LAION, and Open Future, is urging EU policymakers to protect open-source innovation in the EU AI Act (to learn more about what the EU is proposing, check out this article) and to avoid hindering open-source AI innovation with regulations. According to the coalition, “overbroad obligations” that favour closed and proprietary AI development — like models from top AI companies such as OpenAI, Anthropic and Google — “threaten to disadvantage the open AI ecosystem.”
OpenAI can’t tell if something was written by AI after all
OpenAI has shut down its tool designed to detect whether a text was generated by an AI or written by a human. The company stated that the tool is "no longer available due to its low rate of accuracy" (the link to the detector's page now leads to a 404 page). OpenAI has committed "to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated," but did not elaborate further.
Netflix Reality Show ‘Tortures’ Contestants With Deepfaked Photos of Their Partners Cheating
Netflix has a new reality show in which five couples are split up and put into two separate houses together with a number of single people. At the end of each day, the contestants are shown videos of their partners cheating on them. The twist is that some of those videos are deepfaked, and the contestants have to guess whether what they see is real. Apparently, the participants were not informed that the videos would be manipulated. The show has already caused controversy, with some comparing it to a real-life episode of Black Mirror.
🤖 Robotics
Waymo puts the brakes on self-driving trucks program
Waymo is closing Waymo Via, its self-driving truck program, to focus on its easier and more profitable ride-hailing service, Waymo One. According to TechCrunch, “the vast majority of employees on Waymo’s trucking team have taken other roles within the company”. Despite this shift, Waymo says its partnership with Daimler Truck North America to develop an autonomous truck platform will remain intact, albeit at a slower pace. Other partnerships, such as those with UPS and J.B. Hunt, have ended.
Boston Dynamics’ Founder on the Future of Robotics
IEEE Spectrum speaks with Marc Raibert, the founder of Boston Dynamics, who shares his long-term vision of what robotics can be and how the Boston Dynamics AI Institute, which he describes as the Bell Labs of robotics, can help advance robotic manipulation and change people’s perception of robotics.
The In-Credible Robot Priest and the Limits of Robot Workers
Inside the 400-year-old Kodai-ji temple in Kyoto, Japan, a robotic priest welcomes visitors and conducts sessions of Buddhist teachings, raising questions about the limits of automation. The robot inspired a couple of experiments checking whether people respond differently to a robot monk than to a human one - and yes, they do, in favour of humans. In this article, the researchers behind the experiments explain the results and point out what humans have but robots don’t - credibility. They argue that in a future where robots are increasingly capable, those who can maintain credibility may have a better chance of thriving in a highly automated world.
H+ Weekly sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who bought me a coffee on Ko-Fi. Thank you for the support!
You can follow H+ Weekly on Twitter and on LinkedIn, or join our Discord server.
Thank you for reading and see you next Friday!