Ahead of AI Safety Summit - Weekly Roundup - Issue #437
This week - one humanoid robot takes its first steps while another is being tested at Amazon; biotech CEO takes her own medicine; is life extension ethical; and more!
In a little over a week, around 100 experts on artificial intelligence and AI regulation will meet at Bletchley Park in the UK for the AI Safety Summit. Here is everything you need to know about the event.
The summit is set to take place on November 1st and 2nd at Bletchley Park, famous for housing the codebreakers who deciphered the Enigma code during World War II. Organised by the British government, this event is the first of its kind focusing on AI safety, with a goal to "turbocharge action on the safe and responsible development of frontier AI around the world."
Around 100 invited experts – politicians, business leaders and academics – will discuss the misuse of and risks associated with narrow AI systems possessing dangerous capabilities, such as those used in bioengineering or cybersecurity, as well as frontier AI models (large language models being a prime example). The first AI Safety Summit has five objectives:
a shared understanding of the risks posed by frontier AI and the need for action
a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
appropriate measures which individual organisations should take to increase frontier AI safety
areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
a showcase of how ensuring the safe development of AI will enable AI to be used for good globally
At the time of writing, no official guest list is available. All we know so far is that the summit will be limited to 100 politicians, business leaders and academics. Among AI companies, only representatives of the leading ones, such as Alphabet, Microsoft or OpenAI, are expected to attend; smaller companies and startups are unlikely to be represented. As Matt Clifford, co-organiser of the summit and a cofounder of startup incubator Entrepreneur First, tweeted: "There are a tonne of trade offs in planning a summit like this. We've chosen to have a very small, very focused summit which aims to get substantive outcomes where every attendee is an active participant."
What is available is the summit’s programme. Day one features a three-track dialogue covering the following topics: Understanding Frontier AI Risks; Improving Frontier AI Safety; and AI for good – AI for the next generation (which will focus on the use of AI in education). On the second day, the UK Prime Minister will meet a small group of governments, companies and experts to further discuss mitigating emerging AI risks and harnessing AI's potential for good. At the same time, the UK Technology Secretary will meet with international peers to agree on the next course of action.
The AI Safety Summit is one of many initiatives this year bringing together AI experts and policymakers to discuss the safety of advanced AI models. Similar gatherings have been organised by the White House in the US (I wrote about them in Issue #425 and Issue #432), but the AI Safety Summit will be the first to bring together global leaders.
The summit will continue the global discussion about how to regulate the booming AI industry. So far, only the European Union, with its AI Act, and China have AI-specific regulations in place or in the pipeline. US regulators are still debating which approach to take, and it is unlikely that any regulations will be passed this year.
Meanwhile, the UK government has chosen a “pro-innovation” approach to regulating AI. This means the UK will not introduce new blanket rules covering all AI technologies. Instead, existing regulators – such as the UK data watchdog and the communications regulator, Ofcom – are expected to oversee the use of AI, based on a “pro-innovation framework”. This framework, built on promoting safety, transparency, fairness, accountability, and the ability of newcomers to challenge established AI players, aims "to bring clarity and coherence to the AI regulatory landscape" without hampering UK innovation. Not everyone in the UK supports this approach; critics urge the government to introduce new AI legislation or risk falling behind the EU and the US in this matter. Meanwhile, 60% of people in the UK would like the government to regulate the use of AI in the workplace to ensure job security.
The keynote speeches from the AI Safety Summit will be livestreamed, but the link does not appear to have been published yet. I will share it in next week’s issue if it is available by then; if not, I’ll share it on Twitter/X and on LinkedIn. I’ll also publish a summary of the summit, so please subscribe to not miss it.
If you enjoy this post, please click the ❤️ button or share it.
From Humanity Redefined
This was an interesting week for the newsletter. We kicked off with the introduction of the new name and the new identity - Humanity Redefined - followed by the biggest growth in over eight years. To everyone who joined us this week - welcome, and I hope you enjoy my work. To everyone else - thank you for being here.
On Tuesday, AI Supremacy published an article titled Who Will Mass Produce Humanoid General Purpose Robots First? that I co-wrote with Michael Spencer, the writer behind AI Supremacy. Check it out, and while you are there, subscribe to AI Supremacy.
And on Wednesday, we had a guest article from Isar Bhattacharjee, who explores how good we are at using AI assistants and what we can do to use them more effectively.
Becoming a paid subscriber is the best way to support the newsletter.
If you enjoy and find value in my writing, please hit the like button and share your thoughts in the comments. Share the newsletter with someone who will enjoy it, too.
You can also buy me a coffee if you enjoy my work.
🦾 More than a human
This biotech CEO decided to take her own (fertility) medicine
Dina Radenkovic, CEO of biotech startup Gameto, took the phrase “you should use your own product” seriously and participated in her company's study on a new, streamlined in-vitro fertilization (IVF) method. Instead of the traditional two-week hormone injection regimen costing around $6,000, Gameto's method requires fewer injections and matures human eggs in a lab using lab-made ovary cells. While currently less effective than traditional IVF, the simplified procedure could appeal to women looking to freeze their eggs. The technology uses stem cells to produce granulosa cells essential for egg maturation, with the goal of helping women balance career and family.
Superficial Brain Implant Could Have a Deep Impact
A new brain implant developed by Motif Neurotech promises to merge the effectiveness of deep brain stimulation with the convenience of transcranial magnetic stimulation. The pea-sized device is designed to be implanted below the skull but above the dura mater - the protective membrane that surrounds the brain - and it works wirelessly, with no onboard battery. Using magnetoelectric materials, the researchers showed that the implant can convert magnetic fields into electrical power. In a proof-of-concept experiment with a human volunteer (who was already undergoing brain surgery), the team demonstrated that the device works correctly and can stimulate the brain through the dura mater.
A Bionic Hand Melds With Woman's Own Bone, Nervous System
An international team of researchers has developed a prosthetic hand that integrates directly with the patient's own nerves, bones, and muscles. In their paper, the researchers report that after a year of daily use at home, the patient gained better control over the prosthesis and that their solution is suitable for long-term everyday use. Additionally, the patient's quality of life has improved: phantom limb pain decreased from 5 to 3 on a 10-point pain scale, and stump pain disappeared completely.
▶️ Is Life Extension Ethical? (46:49)
As life extension technology becomes more feasible every year, more people are asking whether we should use technology to live longer lives. In this video, Isaac Arthur (whose channel I highly recommend subscribing to) addresses common arguments against life extension technology. These range from concerns about it being unnatural, to social issues like unequal access, potential stagnation, the end of traditional relationships, and the broader morality of life extension. Ultimately, Isaac concludes that life extension is ethical and that we should pursue it.
🧠 Artificial Intelligence
AI Reads Ancient Scroll Charred by Mount Vesuvius in Tech First
A 21-year-old computer science student won the first stage of the Vesuvius Challenge, whose goal is to read scrolls from Herculaneum, a Roman city destroyed by Mount Vesuvius in October AD 79. His machine learning algorithm was able to read more than 10 characters in a 4-square-centimetre area of the fragile papyrus. The competition deadline is December 31st, 2023, and the Grand Prize of $700,000 will go to the first team to read four passages of text from inside the two intact scrolls.
How Generative AI Helped Me Imagine a Better Robot
In this article, Didem Gürdür Broo shares her experiences with using generative AI as a tool in the design process. For this experiment, she used AI image generators to help design a robot. Although the results of this experiment were mixed and her team did not use any of the generated designs, she highlights how these tools boosted her imagination, made her think differently, and allowed her to see connections more clearly. “These tools can shift our mindsets and move us out of our comfort zones—it’s a way of creating a little bit of chaos before the rigors of engineering design impose order,” she writes.
🤖 Robotics
▶️ Figure 01 humanoid robot takes its first steps (0:38)
This week, Figure released the first video showcasing their humanoid robot walking, the result of 12 months of work. Figure is a relatively new player in the humanoid robot scene. Founded in 2022, the company promises that its robot, Figure 01, will be “the world’s first commercially viable autonomous humanoid robot”. To learn more about Figure and other companies pioneering the commercial humanoid robot sector, check out my article Ten Companies Leading the Upcoming Humanoid Robot Wave.
Amazon begins testing Agility’s Digit robot for warehouse work
At the "Delivering the Future" event, Amazon announced that it would begin testing Agility Robotics' bipedal robot, Digit, at its facilities. Agility Robotics is among the companies included in Amazon’s $1 billion "Industrial Innovation" fund and their robot, Digit, is the only one of the upcoming humanoid robots commercially available. Recently, Agility Robotics opened a new factory, capable of producing 10,000 robots per year.
‘Social loafing’ found when working alongside robots
Researchers at the Technical University of Berlin conducted a study to see how people react to working with a robot. They found that people come to see robots as part of their team, and that if a robot performs particularly well, people tend to take a laid-back approach to the work. Once the robot has proven proficient, people start to pay less attention to the quality of its work, similar to how we behave when working with a trusted colleague.
Marines Test Fire Robot Dog Armed With Rocket Launcher
The U.S. Marine Corps recently tested a robot dog (the Marines preferred to refer to the robot as “robot goat”) equipped with a training version of the M72 infantry anti-armour rocket launcher. There aren't many details available about the tests. Interestingly, it appears the Marines used an off-the-shelf Chinese Unitree Go1 robot for the tests. The Marine Corps isn't the only military branch exploring the potential of quadruped robots on the battlefield; both Russian and Chinese militaries have conducted similar tests in the past.
🧬 Biotechnology
Anti-ageing molecule boosts fertility in ageing mice
Researchers found that spermidine, a compound present in many cells and linked to lifespan extension in various organisms, boosted fertility in older mice. Supplemented mice showed improved egg quality and reduced cellular ageing, likely due to spermidine aiding cells in eliminating damaged components. Further studies are needed to assess its potential in human fertility treatments.
Thanks for reading. If you enjoy this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
You can follow Humanity Redefined on Twitter and on LinkedIn.
The trick is going to be regulating AI without suffocating the technology and its benefits out of existence, like we did with nuclear energy. This is a tough balancing act.