AI that can model and design the genetic code for all domains of life - Sync #507
Plus: Grok 3; Figure shows its in-house AI model for humanoid robots; HP acquires Humane AI’s IP; Meta explores robotics and announces LlamaCon; Microsoft's quantum chip; and more!
Hello and welcome to Sync #507!
For this week’s main story, I have chosen to highlight Evo 2, the latest AI model from Arc Institute that can model and design the genetic code for all domains of life.
Elsewhere in AI, xAI has released its latest model, Grok 3, while looking to raise $10 billion for a $75 billion valuation. In other AI news, Mira Murati, former OpenAI CTO, has revealed her new AI startup, HP has acquired Humane AI’s IP, Meta has announced LlamaCon 2025, and OpenAI is attempting to “uncensor” ChatGPT while expanding Operator to more countries.
Over in robotics, Figure has unveiled Helix, its in-house-developed Vision-Language-Action model for humanoid robots. We also have the latest research on human-robot collaboration from Meta, along with hints at a humanoid robotics project at the company.
Beyond that, this week’s issue of Sync also features how living electronics could heal bodies and minds, Microsoft’s latest quantum computing chip, a conversation with Jeff Dean and Noam Shazeer—two of the most influential figures in AI research—on their work at Google, and how Sam Altman sidestepped Elon Musk to win over Donald Trump.
Enjoy!
AI that can model and design the genetic code for all domains of life
AI has many exciting applications, but the one that fascinates me most is its use in biology. Biological systems are incredibly complex, with countless interacting mechanisms, making it challenging to understand what is going on at the level of cells, proteins, or DNA.
The introduction of computational biology and machine learning has greatly improved our understanding of biological systems. With AlphaFold, we can now predict the shape of a protein just from its amino acid sequence, a technology DeepMind used to create a database of 200 million protein structures for researchers to use. Meanwhile, many startups use AI tools to identify drug candidates or even to generate new drug molecules outright.
This week, we got a new tool in the computational biology toolkit—Evo 2, developed by researchers at Arc Institute, a research lab focusing on solving biological problems with applications in biomedical research. Arc Institute was founded in 2021 by Stanford University biochemistry professor Silvana Konermann, UC Berkeley bioengineering professor Patrick Hsu, and Stripe CEO Patrick Collison.
Evo 2 is a biological foundation model designed for genomic modelling, prediction, and sequence generation. Trained on 9.3 trillion DNA base pairs across bacteria, archaea, eukaryotes, and bacteriophages, Evo 2 is one of the largest genome-scale AI models. Available in 7 billion (7B) and 40 billion (40B) parameter versions, it features a 1 million-token context window, enabling it to analyse long-range genomic interactions at single-nucleotide resolution.
Unlike previous models, Evo 2 does not require task-specific fine-tuning, yet it accurately predicts the impacts of genetic variations, from noncoding pathogenic mutations to BRCA1 variants. It also generates genomic sequences that score highly on naturalness, supporting applications in synthetic biology and genome design.
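This kind of zero-shot variant-effect prediction is typically done by comparing the model's likelihood of the mutated sequence against the reference: a variant that makes the sequence look "unnatural" to the model is a candidate for being deleterious. Here is a minimal, self-contained sketch of that likelihood-ratio idea, using a toy trigram model as a stand-in for Evo 2 (the function names and the scoring model are illustrative, not part of the Evo 2 codebase):

```python
# Toy illustration of zero-shot variant-effect scoring, as done with
# genomic language models such as Evo 2: score a single-nucleotide
# variant by the change in the model's log-likelihood. The "model"
# below is a stand-in trigram model trained on a repetitive sequence.
from collections import defaultdict
from math import log

def train_trigram(sequences):
    """Count trigram frequencies to build a toy nucleotide model."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i in range(len(seq) - 2):
            counts[seq[i:i + 2]][seq[i + 2]] += 1
    return counts

def log_likelihood(seq, counts, alphabet="ACGT"):
    """Sum of log P(next base | previous two bases), add-one smoothed."""
    ll = 0.0
    for i in range(len(seq) - 2):
        ctx, nxt = seq[i:i + 2], seq[i + 2]
        total = sum(counts[ctx].values()) + len(alphabet)
        ll += log((counts[ctx][nxt] + 1) / total)
    return ll

def variant_effect(ref, pos, alt, counts):
    """Delta log-likelihood of a single-nucleotide variant.
    Strongly negative values mean the variant looks 'unnatural'
    under the model, a proxy for deleteriousness."""
    mut = ref[:pos] + alt + ref[pos + 1:]
    return log_likelihood(mut, counts) - log_likelihood(ref, counts)

counts = train_trigram(["ATGACGATGACGATGACG" * 4])
ref = "ATGACGATGACG"
delta = variant_effect(ref, 5, "T", counts)
print(f"delta log-likelihood: {delta:.2f}")  # negative: variant looks unnatural
```

A real workflow would swap the toy trigram model for Evo 2's per-nucleotide probabilities; the likelihood-ratio logic stays the same, which is why no task-specific fine-tuning is needed.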
Evo 2 is one of the most advanced biological models available. It can accurately identify disease-causing mutations in human genes and is capable of designing new genomes as long as those of simple bacteria, opening new possibilities in bioengineering and biomedical research.
Additionally, Arc Institute made Evo 2 open source. The model’s weights, training code, inference code, and the OpenGenome2 dataset are all publicly available on GitHub. The Evo 2 40B version is also available on the Nvidia NIM platform and can be used within minutes.
On top of that, the researchers behind Evo 2 have provided web tools for sequence generation and interpretability, making advanced genomic modelling accessible to researchers. Evo Designer allows users to generate and design DNA sequences, enabling applications in synthetic biology and genome engineering. Evo Mechanistic Interpretability helps users explore the model’s learned features, such as exon-intron boundaries, transcription factor motifs, and protein structures, providing insights into genomic function. These tools enhance the usability of Evo 2, enabling scientists to analyse, modify, and design biological sequences in an intuitive and interactive way.
The fact that Evo 2 is open-source might raise questions about the safety of giving away such a powerful model. The researchers anticipated this and implemented multiple safety measures to mitigate potential risks. For example, Evo 2 was trained without eukaryotic virus genomes, so that it cannot be used to design or manipulate pathogenic human viruses. This was confirmed through extensive testing, which showed that Evo 2 performs poorly when generating viral protein sequences. Additionally, the model was evaluated for potential biases in human genomic predictions to ensure fair and unbiased results across diverse populations. These measures aim to balance the benefits of open science with responsible risk management.
For more information about Evo 2, I recommend reading the paper describing the model. And speaking of the paper, if you check the list of authors, you’ll see Greg Brockman, the co-founder and president of OpenAI, among them.
It turns out that Brockman used his sabbatical leave from OpenAI to work on Evo 2, contributing to the development of algorithms that process and analyse the massive dataset the model was trained on. His work enabled Evo 2 to be trained with 30 times more data than Evo 1 and to reason over eight times as many nucleotides at a time. I’ve gained a new level of respect for Brockman for doing that.
Evo 2 represents a major advancement in genomic AI, combining prediction, interpretability, and sequence design in a single model. With state-of-the-art mutation effect prediction, genome-scale sequence generation, and an open-source release, Evo 2 lays the foundation for AI-driven biological discovery and synthetic life design.
If you enjoy this post, please click the ❤️ button or share it.
Do you like my work? Consider becoming a paying subscriber to support it
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is generous support for the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🧠 Artificial Intelligence
Elon Musk’s xAI releases its latest flagship model, Grok 3
xAI has released its latest flagship model, Grok 3. Currently in early preview, the new Grok comes in three versions—Grok 3, Grok 3 Mini, and Grok 3 (Think). The company claims that Grok 3 excels in mathematics, coding, world knowledge, and instruction-following, outperforming GPT-4o and other competitors on key benchmarks such as AIME 2025 (93.3% accuracy) and GPQA (84.6% accuracy). Grok 3 also introduces "Think" mode, enabling deep reasoning, backtracking, and multi-step problem-solving, similar to how OpenAI’s o-models or DeepSeek R1 work. Additionally, it features DeepSearch, an AI-powered research tool that synthesises real-time internet knowledge. Grok 3 is available for X Premium+ subscribers (whose subscription price doubled just hours after the Grok 3 announcement) and will be accessible via xAI’s API platform in the coming weeks.
How Sam Altman Sidestepped Elon Musk to Win Over Donald Trump
It was somewhat surprising that the biggest AI project in the US, the Stargate Project, came not from Elon Musk but from Sam Altman and OpenAI. This article explores how Sam Altman strategically outmanoeuvred Elon Musk to gain favour with President Trump and position OpenAI at the centre of the new administration’s AI policies. His ability to shift political allegiances, quietly engage with Trump’s team, and align OpenAI’s ambitions with Trump’s infrastructure and economic agenda proved crucial in sidelining Musk.
Musk’s xAI Discussing $10 Billion Raise at $75 Billion Valuation
xAI, Elon Musk's artificial intelligence company, is seeking $10 billion in funding at a $75 billion valuation. Investors in discussions include Sequoia Capital, Andreessen Horowitz, and Valor Equity Partners. In May and December 2024, xAI raised $6 billion in each of two separate funding rounds, bringing the company’s valuation to $51 billion.
Thinking Machines Lab is ex-OpenAI CTO Mira Murati’s new startup
Mira Murati, former CTO at OpenAI, has unveiled her new AI startup, Thinking Machines Lab. The company aims to develop customisable multimodal AI systems that collaborate with humans and adapt to diverse expertise levels. Additionally, it plans to focus on AI safety, preventing model misuse, and openly sharing code, datasets, and best practices to support AI alignment research. “The goal is simple: advance AI by making it broadly useful and understandable through solid foundations, open science, and practical applications,” Murati wrote in a tweet announcing the new company.
OpenAI tries to ‘uncensor’ ChatGPT
In its latest Model Spec document, OpenAI announced a shift in its AI model training to emphasise intellectual freedom, allowing ChatGPT to engage with more challenging and controversial topics. The goal is for ChatGPT to answer more questions, provide multiple perspectives, and reduce restrictions on certain topics. OpenAI’s move aligns with Silicon Valley's evolving stance on AI safety, where open discussions are increasingly favoured over strict content moderation. While OpenAI denies that these changes are intended to appease the Trump administration, the timing raises questions.
OpenAI rolls out its AI agent, Operator, in several countries
OpenAI is rolling out Operator, an AI agent that can control a web browser and perform tasks on behalf of users—such as booking tickets, making reservations, filing expense reports, and shopping online—to ChatGPT Pro subscribers in multiple countries, including Australia, Brazil, Canada, India, Japan, Singapore, South Korea, and the UK. Operator will not be available in the EU, Switzerland, Norway, Liechtenstein, or Iceland. Initially launched in the US in January, Operator is only accessible to ChatGPT Pro subscribers, who pay $200 per month.
Meta announces LlamaCon 2025
Meta has announced LlamaCon, a developer conference that promises to share the latest on its open-source AI developments. The conference is scheduled for April 29th, 2025. This event may serve as a perfect opportunity for Meta to release Llama 4, especially since it's occurring almost exactly a year after the release of Llama 3.
HP acquires Humane AI’s IP
Humane, the creator of the failed AI Pin that promised to replace smartphones with an AI assistant device, is no more. For $116 million, HP has acquired the company, its team, and its intellectual property, which includes more than 300 patents and patent applications.
OpenAI’s Sora Filmmaking Tool Meets Resistance in Hollywood
OpenAI has been in discussions with major film studios, including Disney, Universal, and Warner Bros., about using its AI video generator, Sora, for filmmaking. However, despite these talks, no agreements have been reached due to studios' concerns about data usage, legal risks, and potential conflicts with labour unions. According to the article, studios are interested in AI’s potential but remain cautious about working with tech companies, fearing a loss of control over their intellectual property, as seen with previous shifts to YouTube, Netflix, and Meta. Additionally, there is no clear agreement on how AI-generated content would be monetised or how revenue would be shared with filmmakers and actors entitled to profit participation.
US Copyright Office rules out copyright for AI-created content without human input
The US Copyright Office (USCO) has published its second report on the relationship between copyright and AI. The report states that works created entirely by AI with no human involvement cannot be copyrighted. In other words, simply entering text prompts into an AI service, no matter how complex or sophisticated, does not make the output copyrightable. However, films or other complex works that use AI tools to enhance pre-existing content may still qualify for copyright. The key factor in determining copyright protection is the level of human contribution. The report also distinguishes between assistive AI tools (e.g., ageing or de-ageing actors, object removal in films) and generative AI tools. The former does not limit copyright protection, while the latter requires further legal analysis.
▶️ NVIDIA CEO Jensen Huang's Vision for the Future (1:03:03)
In this video, Cleo Abram speaks with Jensen Huang, CEO of Nvidia. The conversation begins with a brief history of computing and Nvidia before exploring the future possibilities of AI and how it will transform our lives. As Huang noted, the past 10 years were about the creation and discovery of AI, while the next decade will focus on applying that knowledge to every aspect of our lives. I liked how this discussion was framed around optimism about the future and the positive impact technology can have.
Face readers
We are not always the best at understanding how animals feel, but AI can help. This article highlights startups and researchers developing AI models that can detect not only pain and stress from pictures of animals but also emotions such as happiness, frustration, and disappointment. These tools could usher in a new era of animal care, prioritising their health, welfare, and protection.
China's Position in AI & BigTech
If you are interested in learning more about the Chinese tech ecosystem, I recommend listening to this conversation.
▶️ Jeff Dean & Noam Shazeer – 25 years at Google: from PageRank to AGI (2:15:35)
This is a fascinating conversation with Jeff Dean and Noam Shazeer—two of the most influential figures in AI research—spanning from the early days of Google to current AI projects and what we can expect in the coming years. The discussion covers hardware advancements, the impact of specialised computing (e.g., TPUs), large-scale language models, the potential for self-improving models, the pros and cons of open research, and the future of AI-assisted coding and research. They also explore what the next breakthrough in AI might look like, envisioning more organic, evolving, and flexible AI architectures. The conversation is over two hours long but worth the time.
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
🤖 Robotics
Helix: A Vision-Language-Action Model for Generalist Humanoid Control
About two weeks ago, Figure announced they are dropping OpenAI’s models in favour of an in-house model. Now, the company has revealed its new AI model, Helix. Helix is a Vision-Language-Action (VLA) model that integrates perception, language understanding, and robotic control. According to Figure, Helix enables full upper-body control of humanoid robots, including fingers, wrists, torso, and head. Additionally, it is the first VLA to support multi-robot collaboration, allowing two robots to work together on long-horizon tasks. Figure also highlights speed, generalisation, scalability, and simplicity as advantages of Helix over previous models. Helix is production-ready and runs on low-power GPUs, making it deployable in real-world applications.
Robotics Startup Figure AI in Talks for New Funding at $39.5 Billion Valuation
Figure, one of the leading companies developing humanoid robots, is in talks with investors to raise $1.5 billion at a valuation of $39.5 billion. As Bloomberg notes, the negotiations are ongoing, and the details of the deal may change. In February last year, Figure raised $675 million in Series B funding, which included Nvidia, Microsoft, OpenAI, and others as investors.
Meta Plans Major Investment Into AI-Powered Humanoid Robots
Meta is another big tech company taking a serious look at humanoid robotics. Mark Gurman reports that Meta plans to develop its own humanoid robot hardware, with a broader goal of creating AI, sensors, and software for humanoid robots that can serve as a foundation for other companies to build their products. According to Gurman, Meta is assembling a team led by Marc Whitten, a former CEO of GM’s Cruise, and is looking to hire around 100 engineers by the end of this year for its humanoid robotics project.
▶️ Meta PARTNR: Unlocking Human-Robot Collaboration (3:01)
Researchers from Meta have introduced PARTNR, a research framework that includes a large-scale benchmark, dataset, and a large planning model designed to aid in building and training AI agents for controlling robots in household tasks. PARTNR comprises 100,000 natural language tasks spanning 60 houses and 5,819 unique objects, aimed at studying multi-agent reasoning and planning. It also includes Habitat, which provides high-fidelity, multi-room, interactive 3D environments that mimic real-world homes for AI training and evaluation. PARTNR is open-source and available on GitHub.
How Apptronik is accelerating the humanoid robot race
In this podcast, the guys from The Robot Report sat down with Jeff Cardenas, CEO and co-founder of Apptronik, one of the leading companies in the humanoid robotics space. It is an insightful conversation in which Cardenas predicts that 2025 will be the year of robotics, as all the necessary components—hardware, software, and AI—are coming together. He also discusses the recently announced partnership with Google DeepMind, explains why designing and building humanoid robots requires a different approach compared to traditional industrial robots, and shares insights on how humanoid robots can extend beyond industry, logistics, and manufacturing.
This Autonomous Drone Can Track Humans Through Dense Forests at High Speed
SUPER is a new micro air vehicle developed by a team from the University of Hong Kong that is significantly more manoeuvrable and reliable than any commercial drone. By combining lidar technology with a unique two-trajectory navigation system to balance safety and speed, SUPER outperformed commercial drones in speed, tracking, and obstacle avoidance while flying at speeds over 20 metres per second (45 mph). The drone achieved a near-perfect success rate of 99.63% in avoiding obstacles at high speed—36 times better than the best alternative drone tested. Additionally, SUPER successfully tracked a jogging person in a dense forest, whereas another commercial drone lost the target.
🧬 Biotechnology
How living electronics could heal bodies and minds
Bioelectronics is a fascinating field that aims to seamlessly merge technology with human biology. It can actively regulate gut microbiomes, enhance cognition, and monitor health in real-time. Bioelectronics can also be used in neural implants to unlock hidden cognitive potential and improve sensory perception, as well as in treating infections, autoimmune diseases, and even cancer. In this podcast, Bozhi Tian of the University of Chicago explores the potential of bioelectronics in medicine and beyond, including energy production, pollution mitigation, and even its applications in art and architecture.
A bacteria-based Band-Aid helps plants heal their wounds
By using patches of bacterial cellulose, researchers from Spain have created a "Band-Aid for plants" that helps plants heal wounds faster and more effectively. These patches can be used in agriculture, grafting, preserving cut plant material, or as a growth medium in laboratories.
💡Tangents
Microsoft unveils Majorana 1, the world’s first quantum processor powered by topological qubits
Microsoft has announced Majorana 1, which it claims is the world’s first Quantum Processing Unit (QPU) with a Topological Core. Designed to scale up to a million qubits on a single chip, it is built using topoconductors, a new class of materials that enable topological superconductivity. Microsoft has also introduced a measurement-based approach to quantum computation, simplifying quantum error correction (QEC). Additionally, the company has been selected for the final phase of DARPA’s US2QC programme, whose goal is to determine whether an underexplored approach to quantum computing can achieve utility-scale operation much faster than conventional predictions.
MIT team takes a major step toward fully 3D-printed active electronics
MIT researchers have demonstrated a new method for 3D-printing active electronic components without the need for semiconductors. They successfully created fully 3D-printed resettable fuses using a copper-doped polymer, enabling basic circuit components similar to semiconductor transistors. While this technology is not yet ready to compete with traditional silicon-based electronics, it represents a step towards decentralised and more sustainable electronics manufacturing. This breakthrough could open new possibilities for fabricating functional devices in remote locations, laboratories, and even in space, reducing reliance on specialised semiconductor fabrication facilities.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"