Google removes pledge to not use AI for weapons - Sync #505
Plus: OpenAI Deep Research; Gemini 2.0 Pro; Figure is dropping OpenAI models; AI agents are coming to GitHub Copilot; new Alexa with Claude is coming; human-like “teeth” have been grown in mini pigs
Hello and welcome to Sync #505!
We’ve had another intense week in the world of tech. For the main story, we will talk about Google removing its pledge not to use AI for weapons and how this fits into a broader trend of AI and tech companies moving to work more closely with the military and government.
Elsewhere in AI, OpenAI released Deep Research, a new agent within ChatGPT designed to autonomously perform multi-step research tasks on the internet. Meanwhile, Google released two new models—Gemini 2.0 Flash-Lite and Gemini 2.0 Pro in preview mode. Additionally, Amazon is gearing up to release a revamped Alexa powered by Anthropic’s Claude, GitHub is bringing agentic features to Copilot, and the US is tightening its grip on AI chip flows across the globe.
Over in robotics, Figure has dropped OpenAI models in favour of its own and expects to deliver 100,000 humanoid robots over the next four years. Apple, meanwhile, has unveiled its first robotics project from its research labs—essentially a real-life Pixar lamp. Researchers from Carnegie Mellon University and Nvidia have also made humanoid robots move like humans.
We also have a new spin-off from Google X aiming to transform agriculture with machine learning, lab-grown dog food now on sale in the UK, and AI generating a new protein that would have taken 500 million years to evolve naturally. Additionally, human-like “teeth” have been grown in mini pigs, and the UK fertility watchdog has found that lab-grown eggs or sperm are on the brink of viability.
Enjoy!
Google removes pledge to not use AI for weapons
For a long time, Google followed the principle of “Don’t be evil.” It was a response to the tech giants of the time (mainly Microsoft), demonstrating to the public that Google was a different kind of company—one that prioritised its mission of organising humanity’s knowledge above all else. In a world of ruthless corporations, Google positioned itself as a quirky, nerdy company, and the “Don’t be evil” slogan was part of that narrative. However, in 2018, Google quietly dropped the slogan and, over time, transformed itself from a scrappy company building its first server racks from Lego bricks into the tech behemoth it is today.
Today, the new crop of AI companies is doing the same as Google did in its early years. A quick look at their mission statements or guiding policies can reveal often idealistic slogans such as “building AI that can benefit all of humanity” or an emphasis on the ethical use of their state-of-the-art AI models.
And it is Google once again that reminds us to take these statements with a massive grain of salt, as the tech giant has quietly removed from its AI Principles a pledge not to build AI for weapons or surveillance. First spotted by Bloomberg and later picked up by the rest of the tech media, the page outlining Google’s AI Principles previously included a passage titled “AI applications we will not pursue,” which listed examples such as “technologies that cause or are likely to cause overall harm.” Now, this passage is gone.
The same day Bloomberg spotted the change, Google published a post titled Responsible AI: Our 2024 report and ongoing work, which responds to Bloomberg’s article and provides a PR-friendly justification for these changes. As Google writes:
“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
Not everyone is happy about these changes and the possibility of their work being used to develop weapons or decide who to kill. Additionally, as Bloomberg notes in its article, this change is eroding the trust some AI researchers had in the company. “They asked us to deeply interrogate the work we were doing across each of them,” said Tracy Pizzo Frey, who oversaw what is known as Responsible AI at Google Cloud from 2017 to 2022, in a message to Bloomberg. “And I fundamentally believe this made our products better. Responsible AI is a trust creator. And trust is necessary for success.”
Others, however, are pleased to see Google changing its stance on how its AI models can be used. Andrew Ng, renowned AI researcher, founder and former leader of Google Brain, and now an AI educator and investor, welcomed the news of Google opening its AI for military use. “I’m very glad that Google has changed its stance,” Ng said during an onstage interview with TechCrunch at the Military Veteran Startup Conference in San Francisco. Ng views these self-imposed restrictions as obstacles to innovation and also sees a moral argument for allowing the military to use advanced AI models. “So how the heck can an American company refuse to help our own service people that are out there, fighting for us?” Ng said at the same conference.
Proponents of allowing AI companies to work with the military and weapon manufacturers see the potential of AI, drones, and robots to provide a technological advantage on the battlefield, minimise casualties, and make military operations more efficient. They can point to concrete examples of this, as AI is already being used by militaries across the globe. Israel employs multiple AI systems, such as Lavender and Habsora, to assist in identifying targets in its ongoing conflict with Hamas in Gaza. Ukraine has become a vast testing ground for military drones, robots, and AI systems. Militaries from other countries are closely monitoring these developments to learn what works and what does not.
AI companies are watching and listening too, always looking for new markets and customers. Governments and the military are among the most valued clients, with the potential to spend millions, if not billions, on the most advanced AI and robotics systems. Google is just the latest tech company to open itself to military collaboration. OpenAI made a similar move a little over a year ago, paving the way for a partnership with Anduril in December 2024. Anthropic is working with Palantir to enable US intelligence and defence agencies to access its Claude models. Cohere, another AI company, has also struck a similar deal with Palantir.
Those partnerships, however, often conflict with the principles upon which the new wave of AI companies was built. Anthropic, for example, explicitly forbids the use of its models to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials, or other systems designed to cause harm to or loss of human life,” according to its Usage Policy. OpenAI famously claims to be building AI to “benefit all of humanity” and, in system cards for its models, highlights how effectively they refuse to assist in developing weapons. Yet, in recent announcements—such as those regarding its Economic Blueprint and the Stargate Project—OpenAI has spoken about bolstering the national security of the US and its allies, hinting at a willingness to work with the military. Google’s recent decision to remove similar language from its communications is just another example of this growing trend in the AI and tech industry.
Those idealistic statements are, in the end, just that—idealistic statements. They crumble upon contact with geopolitical reality, which has made AI and the tech industry a major source of advantage in the competition between states.
Silicon Valley has always been deeply intertwined with US military and government contracts. In fact, Silicon Valley was built on government and military funding—an uncomfortable truth that many people either don’t know or prefer to ignore. I was among them until I recently became more interested in the history of technology and how Silicon Valley came to be. Reading What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry by John Markoff or watching The Secret History of Silicon Valley by Steve Blank provides a clear understanding of the military and government’s significant role in shaping both Silicon Valley and the US tech industry as a whole.
Silicon Valley thrives on myths of visionary founders starting in garages with nothing but a revolutionary idea and sheer determination to bring it to life. However, for many of those visionary entrepreneurs, their first customer was the US military or a military contractor. Others secured their initial rounds of funding through government projects, often with military applications in mind.
In some way, by openly embracing military contracts, Silicon Valley is coming back to its roots.
If you enjoy this post, please click the ❤️ button or share it.
Do you like my work? Consider becoming a paying subscriber to support it.
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous contribution towards the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🦾 More than a human
Technology for lab-grown eggs or sperm on brink of viability, UK fertility watchdog finds
The UK’s Human Fertilisation and Embryology Authority (HFEA) suggests in a recently published report that in-vitro gametes (IVGs)—lab-grown sperm and eggs derived from skin or stem cells—could become viable within a decade. IVGs offer the potential to remove age barriers to conception, enable same-sex couples to have biological children, and provide a new treatment for infertility. However, this technology carries significant risks and raises ethical concerns. IVGs are currently prohibited under UK law, and proving their safety presents major challenges. The HFEA recommends strict regulation to prevent biologically dangerous uses of IVGs.
Humanlike “teeth” have been grown in mini pigs
What if, instead of using titanium implants and dentures, we could grow new teeth? A recent study is bringing that vision closer to reality, as a pair of researchers have successfully grown bioengineered tooth structures by culturing a mix of pig and human tooth cells in pig tooth fragments. Their experiments, which included implanting these structures into mini pig jaws, showed promising results, as the teeth developed natural layers like dentine and cementum. While the technology is still in its early stages, researchers are optimistic about creating lab-grown, fully functional, living tooth replacements in the future.
Advancing wearable robotics through open-source innovation
Researchers at the University of Twente have released CEINMS-RT, an open-source platform for wearable robotics that aims to bridge the gap between human intent and robotic actions. CEINMS-RT enables real-time neuro-mechanical model-based control of movement-assistive robots, such as exoskeletons, exosuits, and bionic limbs. The project is available here.
🧠 Artificial Intelligence
Introducing Deep Research
OpenAI has introduced Deep Research, a new agent within ChatGPT designed to autonomously perform multi-step research tasks on the internet. Users can provide a prompt, and the agent will find, analyse, and synthesise information to generate comprehensive reports. According to OpenAI, Deep Research scores 26.6% on the Humanity’s Last Exam benchmark, twice the score of o3-mini in high-compute mode, while also setting new state-of-the-art results on other benchmarks. Currently, Deep Research is available to ChatGPT Pro subscribers at $200 per month, with a limit of 100 queries per month. OpenAI plans to expand access to Plus, Team, and Enterprise users in the future.
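OpenAI hasn’t published how Deep Research works under the hood, but agents of this kind generally follow a plan-search-synthesise loop: query, read, decide whether more digging is needed, then write up. Below is a minimal, purely hypothetical sketch of that loop; the `llm` and `web_search` helpers are stand-ins for whatever model and retrieval calls a real system would use.

```python
# A minimal, hypothetical sketch of a multi-step research agent loop.
# `llm` and `web_search` are placeholders, not OpenAI's actual internals.

def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError

def web_search(query: str) -> list[str]:
    """Placeholder for a web search returning text snippets."""
    raise NotImplementedError

def research(question: str, max_steps: int = 5) -> str:
    notes: list[str] = []
    for _ in range(max_steps):
        # Ask the model what to look up next, given what it has gathered so far.
        query = llm(
            f"Question: {question}\nNotes so far: {notes}\n"
            "Reply with the single most useful web search query."
        )
        notes.extend(web_search(query))
        # Let the model decide whether it has enough material to answer.
        done = llm(f"Notes: {notes}\nIs this enough to answer '{question}'? yes/no")
        if done.strip().lower().startswith("yes"):
            break
    # Synthesise everything gathered into a final report.
    return llm(f"Write a sourced report answering '{question}' using:\n{notes}")
```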
Gemini 2.0 is now available to everyone
Google announced the release of Gemini 2.0 Pro, claiming the new model offers the strongest coding performance in the Gemini family, handles complex prompts, and understands and reasons about world knowledge better than any of its previous models. However, the benchmarks Google provided compare Gemini 2.0 Pro only to other models in the Gemini family, so it is worth waiting for independent benchmarks and comparisons against competitors. Google is also releasing Gemini 2.0 Flash-Lite, the most cost-efficient model in the Gemini family, which offers better responses than Gemini 1.5 Flash at the same speed and price.
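For anyone who wants to try the new models, access goes through the Gemini API. Here is a minimal sketch, assuming the google-genai Python SDK and the model names as announced (identifiers may change while the models are in preview, so check Google’s documentation):

```python
# Minimal sketch of calling the new Gemini models via the google-genai SDK.
# Assumes `pip install google-genai` and a GEMINI_API_KEY environment variable;
# model identifiers may change while in preview, so check Google's docs.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.0-flash-lite",  # the new cost-efficient model
    contents="In one sentence, what is a humanoid robot?",
)
print(response.text)
```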
SoftBank Nears Deal to Acquire Chip Designer Ampere
Bloomberg reports that SoftBank is in advanced talks to acquire Ampere, an AI chip company, for $6.5 billion. SoftBank already has one chip company in its portfolio—Arm—whose architecture Ampere licenses to design its chips. It is worth noting that Oracle owns 29% of Ampere and that Oracle and SoftBank, together with OpenAI, are leading the recently announced Stargate Project, a $500 billion plan to boost US AI infrastructure. Additionally, SoftBank is reportedly in discussions to invest between $15 billion and $25 billion in OpenAI, so if the deal goes through, OpenAI could potentially gain easier access to Ampere’s chips or its chip-making expertise via SoftBank.
Amazon's AI revamp of Alexa assistant nears unveiling
Reuters reports that Amazon is gearing up to release a revamped version of Alexa, powered in part by Anthropic’s AI, that might include some agentic features. The reveal of the refreshed Alexa is scheduled for 26 February in New York, though the date could slip if executives raise objections.
GitHub Copilot: The agent awakens
GitHub is upgrading GitHub Copilot with new agentic AI capabilities. Available in preview, Agent Mode allows Copilot to iterate on its own code, detect and fix errors automatically, and analyse terminal commands. It can also identify failing tests and suggest fixes, making debugging more efficient. Developers can now choose from new AI models, including OpenAI’s o1 and o3-mini, Anthropic’s Claude 3.5 Sonnet, and Google’s Gemini 2.0 Flash. Additionally, GitHub has introduced Project Padawan, an autonomous AI-driven software engineering agent that will help developers by automating issue resolution, generating tested pull requests, and handling code reviews.
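GitHub hasn’t said exactly how Agent Mode works internally, but the “spot failing tests and suggest fixes” behaviour boils down to a run-diagnose-patch loop: run the tests, hand the failures to a model, apply its suggested change, and repeat. A hypothetical sketch of that loop (the `llm_suggest_patch` helper is a stand-in, not GitHub’s API):

```python
# Hypothetical sketch of an agentic fix loop: run tests, feed failures to a
# model, apply its suggested patch, repeat. Not GitHub's actual implementation.
import subprocess

def llm_suggest_patch(source: str, test_output: str) -> str:
    """Placeholder: ask a model to rewrite `source` so the failing tests pass."""
    raise NotImplementedError

def agent_fix(path: str, max_attempts: int = 3) -> bool:
    for _ in range(max_attempts):
        result = subprocess.run(
            ["pytest", "--tb=short"], capture_output=True, text=True
        )
        if result.returncode == 0:
            return True  # all tests pass, nothing left to fix
        with open(path) as f:
            source = f.read()
        # Let the agent iterate on the code using the failure output.
        with open(path, "w") as f:
            f.write(llm_suggest_patch(source, result.stdout))
    return False  # gave up after max_attempts
```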
US tightens its grip on AI chip flows across the globe
The US government has introduced stricter regulations on exporting AI chips and technology, aiming to maintain US dominance in AI while blocking access to China, Russia, Iran, and North Korea. AI chip exports are now capped for around 120 countries, with varying levels of restrictions. Eighteen US allies, such as Japan, Britain, and the Netherlands, are exempt from these new limits. Countries like Singapore, Israel, Saudi Arabia, and the UAE will face export quotas, while arms-embargoed nations, including China, Russia, and Iran, are completely barred from receiving AI chips. China’s Commerce Ministry condemned the restrictions and pledged to protect its interests. The move is expected to further escalate US-China tech tensions in the AI sector.
'Digital doppelgangers' are helping scientists tackle everyday problems—and showing what makes us human
The concept of digital twins, virtual replicas of real-world objects and systems, is gaining popularity and promises to unlock new efficiencies in various fields. This article explores the use of digital human twins in medicine, sports, and business. If you want to learn more about digital twins, I have written an article about them (linked below).
AIs and Robots Should Sound Robotic
AI voice synthesis has become so advanced that it is almost impossible to distinguish an artificial voice from a real human one, raising the risk of AI-generated voices being used to deceive people. This article proposes a simple way to make AI-generated voices detectable: make them sound robotic. The authors argue that this approach is straightforward and easy to implement, and that it would complement labelling of AI-generated content by letting you immediately hear whether you are talking to a machine.
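A classic way to make a voice unmistakably robotic is ring modulation, the decades-old effect behind vintage movie robot voices: multiply the speech signal by a low-frequency carrier. A minimal sketch with NumPy, assuming the audio is already a mono float waveform:

```python
# Minimal sketch: ring-modulate a speech signal so it sounds audibly robotic.
# Assumes `audio` is a mono waveform as a float NumPy array in [-1, 1].
import numpy as np

def ring_modulate(audio: np.ndarray, sample_rate: int,
                  carrier_hz: float = 50.0) -> np.ndarray:
    """Multiply the signal by a low-frequency sine carrier."""
    t = np.arange(len(audio)) / sample_rate
    return audio * np.sin(2 * np.pi * carrier_hz * t)

# Example: tag a one-second 220 Hz tone (standing in for synthesised speech).
sr = 16_000
tone = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
robotic = ring_modulate(tone, sr)
```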
What fully automated firms will look like
In this post, the author explores what fully automated firms, companies run end-to-end by AI agents, might look like.
OpenAI’s new trademark application hints at humanoid robots, smart jewelry, and more
OpenAI has recently filed a new trademark application with the USPTO for its brand name “OpenAI,” which may hint at new products or at least indicate the products the company is considering. The application lists AI-powered consumer hardware, including headphones, goggles, smartwatches, smart jewellery, VR/AR headsets, and laptop/phone cases, as well as "user-programmable humanoid robots" for assistance and entertainment. However, it is worth noting that a trademark filing does not confirm concrete product plans—it merely indicates areas OpenAI is exploring. Trademark applications are often broad and do not guarantee actual product releases.
Anthropic has a new way to protect large language models against jailbreaks
Anthropic has developed a new system to block jailbreak attacks that trick LLMs into bypassing their safety mechanisms, which the company claims may be the most robust defence yet against such attacks. Rather than modifying the model itself, Anthropic has created an external filter to detect and block jailbreak attempts. Tests showed that this shield reduced successful jailbreak attempts from 86% to 4.4%. However, it occasionally blocks benign queries, such as basic biology or chemistry questions, and increases computational costs by 25%.
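Architecturally, the defence is a wrapper rather than a retrained model: classifiers screen what goes into and comes out of the LLM, while the model itself stays untouched. A bare-bones sketch of that pattern (all three functions are placeholders, not Anthropic’s implementation):

```python
# Bare-bones sketch of the wrapper pattern: external classifiers screen
# inputs and outputs while the underlying model stays unmodified.

def input_classifier(prompt: str) -> bool:
    """Placeholder: True if the prompt looks like a jailbreak attempt."""
    raise NotImplementedError

def output_classifier(text: str) -> bool:
    """Placeholder: True if the response contains disallowed content."""
    raise NotImplementedError

def model(prompt: str) -> str:
    """Placeholder for the unmodified underlying LLM."""
    raise NotImplementedError

def guarded_model(prompt: str) -> str:
    if input_classifier(prompt):
        return "Request blocked."   # stopped before the model ever runs
    response = model(prompt)
    if output_classifier(response):
        return "Response blocked."  # stopped after generation
    return response
```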
▶️ A conversation with Demis Hassabis
It’s always fascinating to learn about progress in AI from a true expert in the field. Demis Hassabis is, without a doubt, one of the leading figures in AI. In this conversation, he provides a comprehensive overview of the state of AI, how close or far we are from AGI, and the next breakthroughs in AI research and deployment. The discussion also explores creativity and invention in AI, how these systems can deceive humans, and how Hassabis believes AI will help drive new scientific breakthroughs.
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
🤖 Robotics
Figure drops OpenAI in favor of in-house models
Figure AI, one of the leading companies in the emerging field of commercial humanoid robots, is exiting its collaboration agreement with OpenAI in favour of in-house AI models to control its humanoid robots, CEO Brett Adcock announced on X. Beyond describing the new models as a major breakthrough, Adcock disclosed no further details, though he promised to reveal “something no one has ever seen on a humanoid” within the next 30 days. OpenAI has been a longtime investor in Figure.
Figure Plans To Ship 100,000 Humanoid Robots Over Next 4 Years
Figure has signed a commercial deal with “one of the biggest US companies,” CEO Brett Adcock revealed on LinkedIn. Figure already has a partnership with BMW for the carmaker to use its humanoid robots in a factory in South Carolina. Adcock expects to deliver 100,000 humanoid robots over the next four years.
ASAP: making humanoid robots move like humans
Researchers from Carnegie Mellon University and Nvidia have introduced ASAP (short for Aligning Simulation and Real Physics), a two-stage framework designed to enable agile movements for humanoid robots. The results are impressive, and I highly recommend watching the video above or visiting the project’s website. The robots move with remarkable agility and dynamism, performing jumps, squats, and dance moves strikingly close to how humans would perform them.
Apple just built an adorable robot lamp, a sneak peek into robotics work
Rumours about Apple potentially venturing into robotics have been circulating for a while, but the company has kept its work under wraps. Now, Apple researchers have shown us one project they have been working on. ELEGNT is a research initiative exploring expressive and functional movements for a robotic lamp that a human can easily and intuitively control. To test their ideas, the researchers built a robotic lamp that looks and behaves like a real-life Pixar lamp.
Figure AI details plan to improve humanoid robot safety in the workplace
The push to automate workplaces with humanoid robots sometimes leaves safety in the background. With the creation of the Center for the Advancement of Humanoid Safety, Figure aims to put safety first and become the industry leader in safe humanoid robots. The newly formed organisation’s mission is to establish rigorous safety validation standards and build public trust in humanoid robot safety through testing and certification, and it has committed to transparency by sharing updates on safety status, testing plans, and progress.
Amazon unveils location of first planned Prime Air drone delivery in the UK
Amazon has announced plans to expand its Prime Air drone delivery service to Darlington in northern England. According to the company’s announcement, significant work remains before residents can receive orders via drone, including obtaining permission from local authorities and securing authorisation from the Civil Aviation Authority (CAA) to operate in the airspace. Amazon has not disclosed when it expects the drone delivery service in Darlington to launch.
Using drones to improve wildfire and forest management
The devastating fires in California have led many to question how such disasters can be prevented in the future. One potential solution for improving wildfire and forest management is the use of drones. In this podcast, the hosts of The Robot Report Podcast welcome Erin Linebarger, co-founder and CEO of Robotics 88, to discuss how autonomous drones can help manage forests and reduce the risk of catastrophic fires, as well as the unique challenges associated with this application of drone technology.
🧬 Biotechnology
Introducing Heritable Agriculture
Heritable Agriculture is the latest company to be spun off from Google X—Google’s moonshot factory. The agritech company applies machine learning to revolutionise farming by optimising plant breeding for higher yields, lower water consumption, and improved carbon storage. The startup has tested its AI-driven plant breeding models in growth chambers and field trials across California, Nebraska, and Wisconsin. Heritable Agriculture secured funding from FTW Ventures, Mythos Ventures, SVG Ventures, and Google, and it is ready to commercialise its technology.
Dog treat made from lab-grown meat on sale in UK as retailer claims a ‘world first’
A new dog treat called Chick Bites, made with cultivated meat by Meatly, has gone on sale at Pets at Home in the UK. Chick Bites consist of plant-based ingredients combined with lab-grown chicken meat. The meat is said to be nutritionally equivalent to traditional chicken breast, containing essential amino acids, fatty acids, vitamins, and minerals. The cultivated chicken is produced from a single sample of cells taken from a chicken egg, enabling endless production without raising or slaughtering animals.
New glowing molecule, invented by AI, would have taken 500 million years to evolve in nature, scientists say
ESM3, an AI model developed by EvolutionaryScale and designed to generate novel proteins, has created a new protein that researchers say would have taken 500 million years to evolve in nature. This protein, named esmGFP, is a green fluorescent protein similar to those found in jellyfish and corals. However, its sequence is only 58% similar to the closest known fluorescent protein, requiring 96 genetic mutations that would have taken 500 million years to evolve naturally, according to a preprint study published last year in which the protein was first described. Independent scientists have peer-reviewed the findings, which were recently published in the journal Science. ESM3's capabilities could accelerate a wide range of applications in protein engineering, including the design of new drugs.
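For context, the “58% similar” figure is a standard pairwise sequence-identity measure: the fraction of aligned positions at which two proteins share the same amino acid. A toy illustration for sequences already aligned to equal length (real comparisons first align the sequences with tools such as BLAST):

```python
# Toy illustration: percent identity between two pre-aligned, equal-length
# protein sequences. Real pipelines align the sequences first (e.g. BLAST).

def percent_identity(seq_a: str, seq_b: str) -> float:
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100 * matches / len(seq_a)

# Two made-up ten-residue fragments differing at one position.
print(percent_identity("MSKGEELFTG", "MSKGEALFTG"))  # 90.0
```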
Stem cells used to partially repair damaged hearts
Researchers in Germany have developed a new method of using stem cells to repair damaged heart muscle. Instead of injecting loose cells, they created a patch composed of cardiomyocytes (specialised heart muscle cells) and supportive stromal cells, which was attached to the heart’s exterior. After first reporting positive results in experiments on mice, the team progressed to trials on primates and a single human heart. In both cases, the researchers observed promising results; however, further research is needed before the method can be widely used in humans.
💡Tangents
▶️ The Road To Super Chips (15:50)
This video explains the concept of super chips, the next step in chip design. A super chip is a collection of smaller chips assembled to form a larger, more powerful chip. Driven by high demand in AI and high-performance computing, super chips promise to unlock a new level of computing performance by reducing data movement and integrating compute and memory more tightly. However, super chips require more power, which also presents new challenges in cooling them.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"