DeepSeek R1 changes the game - Sync #504
Plus: o3-mini; humanoid robots dance alongside human performers; OpenAI's new funding round; Waymo expands tests to 10 new cities; Musk promises Tesla robotaxi service in Austin by June; and more!
Hello and welcome to Sync #504!
What a week this has been… Of course, we will recap everything that has happened with DeepSeek so far, the drama surrounding it, and we will put all of that into context. But that’s not the only story of the week. In fact, this has been the most packed week in tech since ChatGPT and GPT-4 were released almost two years ago.
OpenAI, forced by DeepSeek and R1, released o3-mini, the smallest model in the o3 family of reasoning models. Elsewhere in AI, OpenAI is seeking another funding round that, if successful, would catapult the company’s valuation to up to $300 billion. We have also seen rumours of Gemini 2.0 Pro and Grok 3.
Over in robotics, a group of Unitree H1 humanoid robots joined human performers on stage to celebrate the Lunar New Year while Agility Robotics promises to have a fully cooperative, safety-rated robot within 24 months. In the world of self-driving cars, Waymo is planning to expand its trials to 10 more US cities, and Elon Musk promises to launch Tesla’s robotaxi service in Austin in June.
Apart from that, Chinese scientists have successfully created mice with DNA from two fathers using CRISPR gene editing, the Boom XB-1 has become the first privately developed jet to break the sound barrier, and a Florida-based company plans to launch the first commercial Moon-based data centre in a couple of weeks.
Also, with over 5,000 words, this is probably the longest issue of Sync—that’s how crazy this week was. I hope you’ll enjoy it!
DeepSeek R1 changes the game
Not every day does a new AI model enter the scene and change the rules of the game. GPT-4 was one of those models. When it was released almost two years ago, in March 2023, it became the main topic of conversation not only in the tech world but also in mainstream media. GPT-4, together with ChatGPT—released only three months earlier—completely reshaped how people perceive AI around the world and kick-started the AI boom (or bubble) we see today.
Something on a similar scale might have happened in the past week. This new game-changing model, however, did not come from OpenAI, Google, Anthropic, or any other Western Big Tech company or ambitious startup. It came from China, from an unknown hedge fund company that developed it as a kind of side project. Its name is DeepSeek R1; it offers a performance level close to OpenAI’s o1 for a fraction of the cost, it’s open, and it has changed the game.
In this post, we will take a closer look at what DeepSeek R1 brings to the table, its impact on the tech and AI industry, and the new possibilities it opens up. I will not be diving deep into the details of how R1 works or its technical breakthroughs—that requires a separate post, which I am working on, so stay tuned for that.
Let’s now explore DeepSeek R1 and why it is such a big deal.
The performance of o1 for a fraction of the cost
DeepSeek is an interesting AI lab. It started as an offshoot of High-Flyer Capital Management, a Chinese quantitative hedge fund managing some $8 billion, which makes it one of China’s largest quantitative funds. High-Flyer Capital was founded in 2015 by Liang Wenfeng—described by the Wall Street Journal as a math geek who caught the investing bug—together with two college friends.
Prior to the release of R1, DeepSeek was a relatively unknown Chinese AI lab, though it had been gaining recognition in the AI scene—most notably in December 2024, when it released DeepSeek-V3, a 671-billion-parameter mixture-of-experts model that outperformed Llama 3 and other available open-weight models.
But what put DeepSeek on everyone’s radar was R1, the company’s first reasoning model. Like OpenAI’s o-series models, R1 does not answer a prompt immediately. Instead, it takes some time to “think”—breaking the problem down, working through it step by step, and double-checking at each step whether its chain of thought is correct. In both cases, the result is an AI model that can solve complex queries that other models, like GPT-4o or Llama, were unable to answer correctly.
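To make the idea concrete, here is a deliberately toy sketch in Python—purely illustrative, not how R1 is actually implemented—of the generate-and-verify loop that separates reasoning models from models that commit to their first guess:

```python
import itertools

def verify(candidate, target):
    """Double-check a proposed answer before accepting it."""
    a, b = candidate
    return a * b == target

def reason(target, search_space):
    """Toy 'reasoning loop': propose intermediate candidates one at a
    time, check each against the goal, and only return an answer that
    passed verification -- rather than blurting out the first guess."""
    trace = []  # the chain of thought: every candidate considered
    for candidate in itertools.combinations(search_space, 2):
        trace.append(candidate)
        if verify(candidate, target):
            return candidate, trace
    return None, trace

answer, trace = reason(91, range(2, 20))
print(answer)       # a verified factor pair of 91
print(len(trace))   # how many steps of 'thinking' it took
```

The extra "thinking" steps are exactly why reasoning models are slower and more expensive per query—and why they get hard questions right that one-shot models get wrong.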
Three things make DeepSeek R1 special: it is open, it is efficient, and it delivers world-class performance.
First, R1 is an open-weight model, meaning anyone can download it and start using it without paying anything to DeepSeek (more on that later).
Second, R1 is an efficient model. Unlike OpenAI, Google, Microsoft and other Western AI companies, DeepSeek does not have access to the latest Nvidia GPUs and had to make do with hardware a generation or two older. SemiAnalysis estimates that DeepSeek has access to 50,000 Hopper GPUs, Nvidia’s previous generation. For comparison, xAI built a cluster of 100,000 Hopper GPUs, while Meta, Google and Microsoft each have access to far more than 100,000 state-of-the-art Nvidia GPUs.
To compensate for the disadvantage in terms of computing power, the team at DeepSeek employed clever tricks to make DeepSeek-V3, the base model for R1, as efficient and strong as possible. Then, to build R1, the team used another set of clever tricks to make R1 reason and find the right answer efficiently.
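One of those tricks is worth a quick illustration: DeepSeek-V3 is a mixture-of-experts model, meaning that for each token only a small subset of the network’s “expert” sub-modules is activated, so most of the 671 billion parameters sit idle on any given token. The sketch below is a minimal NumPy caricature of that idea—DeepSeek’s actual router adds load balancing, shared experts and much more:

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, top_k, d_model = 8, 2, 16

# Each "expert" is just a small weight matrix in this caricature.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route a token to its top-k experts and mix their outputs.
    Only top_k of n_experts actually run, so most parameters stay
    idle for any single token -- the source of the efficiency win."""
    logits = x @ gate_w
    top = np.argsort(logits)[-top_k:]   # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over the chosen experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)  # same shape as the input, but only 2 of 8 experts ran
```

The upshot: a model can carry a huge parameter count while paying the compute cost of a much smaller one per token.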
The result is a reasoning model that matches the performance of OpenAI’s o1 model while requiring only a fraction of the cost.

The ARC-AGI-1 benchmark illustrates how much more cost-effective R1 is than o1. Although R1 trails o1 in the low-compute setting—scoring 15.8% against o1’s 20.5%—it achieves that score at roughly one-seventh of the cost.

Those results are impressive. Competitors were always going to reach o1-level performance sooner or later, but I don’t think many expected it to happen so quickly—or that the model closing the gap with OpenAI would come from China, while being both more efficient and open. That’s why the existence of R1 is such a big deal.
Red day for the US tech industry
The arrival of DeepSeek R1 sent shockwaves across the tech industry and the US stock market.
The most affected company was Nvidia, which until this point had ridden the AI wave, making billions upon billions and becoming a $3.5 trillion company. However, the news that DeepSeek R1 achieved performance comparable to o1 while using far less computing power caused Nvidia to lose almost 17% of its stock value, or $600 billion—the biggest one-day market value loss in history.
The loss was more than double Nvidia’s own $279 billion drop in September 2024, which was itself the biggest one-day market value loss at the time, ahead of Meta’s $232 billion loss in 2022 and Apple’s $182 billion drop in 2020.
Nvidia was not the only tech company affected. Companies in the data centre sector, like Dell, Oracle and Super Micro Computer, saw their stock fall by at least 8.7%.
Although this week was not a good one for Nvidia and other companies in the high-performance computing and data centre sectors, this may be just a temporary hiccup. Some analysts argue that the release of DeepSeek R1 lowers the barrier to entry for smaller companies and organisations that would otherwise be unable to use cutting-edge AI reasoning models, enabling further scaling and proliferation of AI. This would suggest that the demand for computing power will not be decreasing.
The leaders of Microsoft and Meta expressed similar opinions and defended their multibillion-dollar investments in AI research and development. "Investing very heavily in capital expenditure and infrastructure is going to be a strategic advantage over time," Meta CEO Mark Zuckerberg said on a post-earnings call, adding that the goal is to make Llama 4, the company’s next flagship model, the world’s most competitive AI model. Meanwhile, Microsoft CEO Satya Nadella said on a call with analysts that "as AI becomes more efficient and accessible, we will see exponentially more demand."
Red flags
Although DeepSeek R1 represents a massive shift in the dynamics of the AI ecosystem, it does not come without controversies. The main concern some have with R1 is where it comes from—China. As a Chinese company, DeepSeek has to obey Chinese laws regulating the development and use of AI models, which means R1 is subject to Chinese censorship. You can easily see the effects yourself—or find examples on social media—by asking R1 what happened in June 1989 or what Taiwan’s status is. Someone has even compiled a dataset of 1,156 questions censored by DeepSeek, if you are curious about which prompts it will and will not answer.
Another issue people raise, also connected to R1 being a Chinese model, is data risk. If you use the online version of DeepSeek, all user data is stored on servers in China, where local intelligence agencies may request access to it—the company says as much in its privacy policy. Because of that, as TechCrunch reports, hundreds of companies and organisations, including the Pentagon and the US Navy, have blocked access to DeepSeek.
There are also questions about the safety aspect of R1. According to a report published by security researchers from Cisco and the University of Pennsylvania, DeepSeek R1 failed to block any of the 50 harmful prompts used in the test. Furthermore, researchers note that DeepSeek R1 lacks robust guardrails, making it highly susceptible to algorithmic jailbreaking and potential misuse.
DeepSeek forces OpenAI to release o3-mini
Nvidia was not the only company that had a bad week thanks to DeepSeek—it was also a bad week for OpenAI.
The release of DeepSeek R1 raised questions about OpenAI’s approach to research and deployment of frontier models. R1 was released just days after OpenAI announced the Stargate Project, a $500 billion project to massively scale up US AI infrastructure.
As Sam Altman noted in a Reddit AMA, DeepSeek has lessened OpenAI’s lead in AI. He also said that OpenAI has been “on the wrong side of history” when it comes to open source. Kevin Weil, OpenAI’s chief product officer, then added that the company is considering opening up models that are no longer state-of-the-art, but did not elaborate further on that idea.
Those are, however, future plans. What OpenAI did in response to DeepSeek R1 was release o3-mini, which runs at three reasoning-effort levels—low, medium and high. There were already rumours that OpenAI was finalising the release of o3-mini, but R1 may have accelerated its timeline.
o3-mini is available to all ChatGPT users, including free users, albeit with rate limits. ChatGPT Plus subscribers get access to o3-mini with higher usage limits than free users, while ChatGPT Pro users, who pay $200 per month, enjoy unlimited access to both o3-mini and o3-mini-high.
In addition to ChatGPT access, o3-mini is available through OpenAI's API services, including the Chat Completions API, Assistants API, and Batch API. Developers can choose between three reasoning effort options—low, medium, and high—to optimize for specific use cases.
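As a quick sketch of what that looks like in practice, the snippet below builds a request body for the Chat Completions API with the `reasoning_effort` parameter (field names as documented at the time of writing; actually sending the request would of course require an API key):

```python
import json

def o3_mini_request(prompt, effort="medium"):
    """Build a Chat Completions request body for o3-mini.
    `reasoning_effort` trades answer quality against latency and cost."""
    assert effort in ("low", "medium", "high")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

body = o3_mini_request("How many primes are below 100?", effort="high")
print(json.dumps(body, indent=2))
```

Low effort is the cheapest and fastest; high effort spends more "thinking" tokens on harder problems.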
Geopolitical implications of R1
In today’s world, the tech industry has become so influential that it can be leveraged on the geopolitical stage. For a long time, the US has been regarded as the leader in AI research and development. The arrival of R1, a cutting-edge Chinese open model, has challenged the dominance of US companies. US AI firms now face a serious competitor, and both they and the country as a whole will need to respond. Venture capital investor Marc Andreessen called the new Chinese model “AI’s Sputnik moment”, while President Trump called R1 a “wake-up call” for US tech companies.
A question raised in conversations about DeepSeek is whether the US sanctions on chip exports have failed. One might look at what the team at DeepSeek has achieved as an example of innovation born from necessity: without access to the latest and greatest GPUs, DeepSeek had to find other ways to improve the performance of its AI models.
Others, like Anthropic CEO Dario Amodei, advocate for stricter and better-enforced export controls on AI chips to China. In his post, Amodei argues that export controls are crucial for maintaining US and allied leadership in AI by preventing China from acquiring millions of advanced chips, which could enable it to achieve military and technological dominance. According to Amodei, DeepSeek’s advancements do not indicate that export controls failed but rather highlight the need for stricter enforcement to close loopholes and limit large-scale chip acquisition.
DeepSeek R1 could also reshape the global AI ecosystem. Until the release of R1, the US was leading the AI race, with China not far behind. The UK, Europe, and other countries were largely insignificant in comparison. R1 now presents an opportunity for change. China has come much closer to US companies, and other countries can too—if they play their cards wisely.
The new wave of AI
What is happening right now in the tech and open-source communities reminds me of the time when the first Llama model leaked to the public in early March 2023. The situation back then was somewhat similar—OpenAI had just released ChatGPT and GPT-4, and both seemed so far ahead of their competitors, let alone open models, that closing the gap appeared a distant prospect. Then the first Llama model leaked, and the open community exploded with excitement. Tools like llama.cpp and Ollama were subsequently created to make working with open models locally easier, giving more people an opportunity to start experimenting with large language models without paying monthly subscriptions.
Now I see something similar happening with DeepSeek R1. I’ve seen people running R1 locally on their laptops and even on a Raspberry Pi. Thanks to the detailed paper explaining how R1 works, researchers from smaller organisations—or even individuals—can try to replicate the model and find new ways to improve it.
Microsoft and GitHub have made DeepSeek R1 available in their model catalogues. DeepSeek R1 is also available on Hugging Face, as well as on platforms like Ollama, which makes running it locally as easy as typing a single command. Meanwhile, Hugging Face has announced Open-R1, a project that aims to fully replicate DeepSeek R1—including training data and source code—and release it under an open-source licence, so that the whole research and industry community can build similar or better models on top of it.
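For the curious, that one command really is just one command—something like the following, assuming Ollama is installed (the exact model tag depends on which distilled size you pick; the full 671-billion-parameter model needs far more hardware than a laptop):

```shell
# Downloads the model on first run, then drops into an interactive chat.
ollama run deepseek-r1:8b
```

The smaller distilled variants fit comfortably on consumer hardware, which is a big part of why the open community has embraced R1 so quickly.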
DeepSeek R1 is a milestone in AI development. It’s not every day that a new AI model shocks the global tech industry, breaks into the mainstream news cycle, causes ripples on the geopolitical stage, and excites the AI community. R1 has demonstrated that there are more effective ways to improve AI performance than simply throwing more computational power at the problem. By making R1 openly available, DeepSeek is levelling the playing field, giving everyone the opportunity to experiment with a state-of-the-art reasoning model for free or use it to build something amazing.
If you enjoy this post, please click the ❤️ button or share it.
Do you like my work? Consider becoming a paying subscriber to support it
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous support towards the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🦾 More than a human
Retro Biosciences, backed by Sam Altman, is raising $1 billion to extend human lifespan
Retro Biosciences, a biotech startup backed by Sam Altman and focused on extending the human lifespan, is raising a $1 billion Series A round after receiving a $180 million seed investment from Altman. The company is developing drugs for diseases like Alzheimer’s and has partnered with OpenAI to use AI for stem cell research. CEO Joe Betts-LaCroix aims to bring a new drug to market within the 2020s. Retro Biosciences joins other billionaire-backed longevity ventures, including Altos Labs (supported by Jeff Bezos) and Unity Biotechnology (backed by Bezos and Peter Thiel).
CRISPR Baby 2.0? Controversial Simulation Touts Benefits of Gene Editing Embryos
Genetically modifying humans or creating “designer babies” is a controversial topic. If done properly, germline editing could reduce inherited diseases across generations. However, it raises ethical dilemmas: at what point does preventing disease cross into "designer baby" territory? Many scientists argue that the approach is unsafe and unproven due to the complexity of genetic interactions. However, a recent study using simulations instead of real embryos to evaluate the effects of germline editing found that adding just 10 protective gene variants could significantly reduce the risk of polygenic diseases such as heart disease, stroke, cancer, depression, and diabetes. Critics point out that the simulation model assumes perfect accuracy in gene editing—something not yet achievable—as well as a complete understanding of the genetic basis of diseases, which remains an ongoing challenge.
🧠 Artificial Intelligence
OpenAI in Talks for Huge Investment Round Valuing It at Up to $300 Billion
According to The Wall Street Journal, OpenAI is in early talks to raise up to $40 billion in a new funding round. SoftBank, which recently joined OpenAI, Oracle, and MGX as one of the leaders of the Stargate Project, is expected to lead the round, potentially investing between $15 billion and $25 billion, while the remaining funds would come from other investors. Initial discussions considered a valuation of $340 billion, but later negotiations lowered it to $300 billion, which would make OpenAI the second-most valuable startup globally, behind SpaceX. If this funding round takes place, it would be the largest in Silicon Valley history. Last October, OpenAI raised $6.6 billion, bringing its valuation to $157 billion, with SoftBank contributing $500 million in that round.
Google quietly announces its next flagship AI model
While the world was talking about DeepSeek and OpenAI was releasing o3-mini to the public, Google accidentally revealed the existence of the Gemini 2.0 Pro Experimental model, a successor to Gemini 1.5 Pro. The company released the Gemini 2.0 Flash model in December as part of the Gemini 2.0 family. No date has been announced yet for the public release of Gemini 2.0 Pro.
Introducing ChatGPT Gov
OpenAI has announced ChatGPT Gov—a streamlined version of ChatGPT tailored for US government agencies, providing them access to OpenAI’s frontier models to improve efficiency and productivity. ChatGPT Gov includes many of the same features and capabilities as ChatGPT Enterprise and can be self-hosted on Microsoft Azure’s commercial or government cloud offerings. In the announcement, OpenAI highlighted that since 2024, over 90,000 users across more than 3,500 US government agencies have exchanged over 18 million messages using ChatGPT. It also cited the Air Force Research Laboratory, Los Alamos National Laboratory, and two state agencies as examples of existing governmental users.
Europe accelerates AI drug discovery as DeepMind spinoff targets trials this year
Demis Hassabis announced at the World Economic Forum in Davos that Isomorphic Labs, a Google DeepMind spinoff, expects its AI-designed drugs to enter clinical trials by the end of the year. Isomorphic Labs is one of over 460 AI startups working on drug discovery, with more than a quarter based in Europe. The company has signed a $45 million research deal with Eli Lilly, with potential milestone payments of up to $1.7 billion. It also collaborates with the Swiss biotech firm Novartis.
Grok 3 seemingly went live for some users
Some users on X briefly gained access to Grok 3, xAI’s latest model, before their access was revoked, suggesting the company is nearing the release of its next flagship AI model. According to those who had the opportunity to interact with Grok 3, it successfully handled logical reasoning and coding-related tasks and was able to answer riddles that other AI models struggled with. However, it was still making errors. Elon Musk claims Grok 3 used “10x” more compute than its predecessor, Grok 2. There is also a possibility that Grok 3 may feature an “Unhinged Mode,” allowing for more controversial or offensive responses.
Humanity’s Last Exam Benchmark
Humanity’s Last Exam is a benchmark developed by Scale AI and the Center for AI Safety (CAIS) to test the limits of AI knowledge at the frontiers of human expertise across multiple disciplines, including mathematics, the humanities, and the natural sciences. The team behind it recently released the first results, and the exam is indeed tough for AI: GPT-4o scores only 3.3%, and the best performer, o3-mini (high), scores 13%. Given the rapid pace of AI development, however, the researchers admit it is plausible that models could exceed 50% accuracy by the end of 2025.
AI Will Write Complex Laws
When it comes to applying AI to the practice of law, the most popular idea is to create an AI that can explain the law. However, this article explores the other side of the coin—using AI to write more complex legislation. The article argues that AI’s broad expertise and reasoning capabilities enable it to draft laws on multiple topics with high specificity. It also suggests that AI-augmented lawmaking is inevitable, driven by the increasing complexity of governance and policymaking demands.
Since everyone is talking about DeepSeek R1, Yannic Kilcher decided to revisit an almost year-old paper from a team of researchers at DeepSeek and two top Chinese universities, which describes DeepSeekMath—a 7-billion-parameter model that outperformed much larger models on math benchmarks. This paper also introduced Group Relative Policy Optimization (GRPO), a reinforcement learning technique that is at the core of R1. Kilcher explains and compares GRPO to previous techniques, highlighting its significance. He also emphasises one of the paper's key conclusions: current LLMs already possess the ability to solve complex problems—it is just a matter of how to surface the correct responses. Kilcher argues that reinforcement learning can help improve AI performance but may also quickly push existing models to their limits, highlighting the need for developing better base models.
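The core of GRPO is simple enough to sketch in a few lines. Instead of training a separate value network (the critic in classic PPO), it samples a group of answers per prompt and normalises each answer’s reward against the group’s mean and standard deviation. The snippet below shows just that group-relative advantage; it is a simplification—the full objective also includes a clipped policy ratio and a KL penalty:

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages as in GRPO: score each sampled answer
    against the mean and spread of its own group of samples for the
    same prompt, instead of consulting a learned value function."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against a zero spread
    return [(r - mu) / sigma for r in rewards]

# Eight sampled answers to one prompt, rewarded 1.0 if correct, else 0.0.
rewards = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
adv = grpo_advantages(rewards)
print([round(a, 2) for a in adv])  # correct answers get positive advantage
```

Answers better than the group average are reinforced, worse ones are discouraged—no critic network required, which is part of why the approach is so cheap.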
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
🤖 Robotics
▶️ Unitree H1: Humanoid Robot Makes Its Debut at the Spring Festival Gala (1:18)
For this year’s Spring Festival Gala, the world's most-watched TV programme, with hundreds of millions of viewers in China and globally, a group of Unitree H1 humanoid robots joined human dancers and performers to celebrate the Lunar New Year.
1X Acquires Kind Humanoid
1X Technologies, a startup developing humanoid robots, has acquired Kind Humanoid, another startup in the same field. Founded in 2023 by Christoph Kohstall, a former Stanford scientist and Google robotics team member, Kind Humanoid focused on building humanoid robots for use in homes. Their only robot was Mona, a bipedal humanoid designed for home and healthcare applications. 1X is a larger company, founded in 2014, that has raised over $130 million and recently unveiled NEO Beta, its humanoid robot designed to work in homes and around humans. The details of the acquisition were not disclosed to the public.
Waymo to test in 10 new cities in 2025, starting with Las Vegas and San Diego
The Verge reports that Waymo, Alphabet’s self-driving company, plans to send its autonomous vehicles to 10 new cities in 2025, beginning with Las Vegas and San Diego. These vehicles will be manually driven and used for testing, with no guarantee of a commercial robotaxi deployment in those 10 new cities. The company aims to evaluate how well its self-driving system adapts to new environments with different weather conditions and driving habits. Testing focuses on "generalisability," ensuring the system can operate in new cities with minimal pre-launch testing.
Elon Musk claims Tesla will launch a self-driving service in Austin in June
Elon Musk announced that Tesla will launch a paid ride-hailing robotaxi service in Austin, Texas, in June 2025 using its own fleet. The service will operate with Tesla’s Full Self-Driving (FSD) software in an "unsupervised" mode, which has yet to be released, although Musk expects it to become available to owners in California and “many regions of the US” this year. Musk provided few details about the rollout but stated that Tesla owners will not be able to add their own cars to the ride-hailing fleet until at least 2026.
▶️ Breaking Out of the Cage: Agility Robotics' Road to A Safety Rated Humanoid (3:10)
In this video, Agility Robotics outlines its plans to improve the safety of its humanoid robots so that they can one day leave workcells and operate alongside humans. It details how it intends to transition its robots from workcell environments to cooperative and, eventually, collaborative settings. Agility promises to have a fully cooperative, safety-rated robot within 24 months, meaning it will be able to work without a safety cage but still be separated from human workers.
▶️ Self-Driving from the factory to the loading dock | Tesla (0:54)
In this promotional video, Tesla shows how its cars autonomously leave the factory and drive themselves to their designated loading dock lanes without human intervention.
DIY Drones Deliver The Goods With Printed Release
Delivery drones are no longer the exclusive domain of Big Tech and well-funded startups. With this project, which promises to turn any drone into a delivery drone, the open-source community joins the game.
MIT Unveils New Robot Insect, Paving the Way Toward the Rise of Robotic Pollinators
Researchers at MIT have created a robotic insect designed with artificial pollination in mind. With its lifelike flapping wings, this tiny robot, weighing under a gram, can hover for nearly 17 minutes and can perform complex flight manoeuvres. Future plans include increasing flight duration tenfold, enhancing autonomy with onboard batteries and sensors, and refining control mechanisms to match the precision of real bees. The ultimate goal is to deploy swarms of these robots in indoor farms, where they could assist in crop pollination.
China to host world's first half-marathon race between humans and robots
In April, China will host the world’s first half-marathon in which human athletes will run alongside humanoid robots. Around 12,000 humans will race against robots from more than 20 companies in the Beijing Economic-Technological Development Area—also known as E-Town, in the capital's Daxing district. To qualify for the race, robots must resemble humans (no wheeled robots allowed) and have a mechanical structure that enables them to walk or run. The robots can be either remotely operated or autonomous.
🧬 Biotechnology
Mice with two dads have been created using CRISPR
Chinese scientists have successfully created mice with DNA from two fathers using CRISPR gene editing. The research involved knocking out 20 imprinted genes crucial for embryonic development to bypass the need for maternal DNA. These genetically engineered mice grew larger than normal, had enlarged organs, were infertile, and had shorter lifespans. Out of 164 gene-edited embryos, only seven live pups were born. While the study marks a significant scientific breakthrough, applying the same methods to humans remains unrealistic due to ethical concerns and technical limitations. However, researchers aim to explore similar approaches in primates to better understand genetic imprinting and reproductive biology.
Dutch pioneer files EU’s second lab-grown meat application
Mosa Meat, the Dutch food tech company and a pioneer in the lab-grown meat space, has submitted the European Union’s second application for cultivated meat, specifically a cell-based beef fat. The EU review includes a nine-month risk assessment by the European Food Safety Authority, followed by a seven-month risk management process. Approval requires a qualified majority vote from EU countries. Some EU nations, such as Italy and Hungary, have attempted to ban cultivated meat due to ideological and agricultural protection concerns. Despite the challenges, CEO Maarten Bosch remains pragmatic and committed to navigating the regulatory process.
‘Miracle’ drug innovation could see a new Wegovy launch every couple of years, Larry Summers says
Speaking on a World Economic Forum panel, former US Treasury Secretary Larry Summers predicts that innovations like Wegovy and Zepbound could emerge every few years due to rapid technological advancements. Summers highlighted that the world is experiencing a period of “stunning technological possibility” with major developments in green energy, computing, and life sciences.
AI Accelerates Enzyme Engineering
A team of bioengineers and synthetic biologists created a new platform using machine learning to design and predict enzyme behaviour efficiently. The new system combines three key steps—building DNA without using living cells, making proteins from that DNA, and testing how well the proteins work. This helps scientists quickly explore and improve enzyme designs. By predicting enzyme structures and functions computationally, thousands of potential variants can be assessed without physical synthesis. The team reported that the platform improved the synthesis of a small-molecule pharmaceutical from 10% to 90% yield.
💡Tangents
▶️ XB-1 First Supersonic Flight (2:00:26)
With its first successful supersonic flight, the Boom XB-1 became America's first civil (non-military, non-governmental) supersonic jet. This historic flight was part of a series of test flights evaluating various systems and innovations used in the XB-1. It also lays the groundwork for Overture—Boom’s supersonic airliner, which is expected to carry 64–80 passengers at Mach 1.7, approximately twice the speed of today’s subsonic airliners. With XB-1 and eventually Overture, Boom hopes to revive commercial supersonic travel. If you want to see the exact moment when XB-1 broke the sound barrier, it happened at 1:01:25, followed by two more supersonic runs in the next 10 minutes. Additionally, Scott Manley has made a video about the flight, providing extra context and details, including how Boom used Starlink and an iPhone to livestream the event from a chaser plane.
First-ever data center on the Moon set to launch next month
Florida-based startup Lonestar Data Holdings plans to launch the first commercial Moon-based data centre, the "Freedom Data Center," in February aboard a SpaceX Falcon 9 rocket. Lonestar claims that storing data on the Moon offers unique benefits, including unparalleled physical security and protection from natural disasters, cyber threats, and geopolitical conflicts because, well, the servers are literally on the Moon. Initial customers for its lunar platform include the state of Florida, the Isle of Man government, AI firm Valkyrie, and the band Imagine Dragons. Previously, Lonestar successfully tested the world's first software-defined data centres from the International Space Station in 2021 and 2022. In February 2024, the company successfully tested its first data centre from the Moon and in cislunar space.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"