California's controversial AI bill - Sync #481
Plus: Neuralink shares a progress update on the second clinical trial; Windows Recall is coming back in October; a $16,000 humanoid robot from Unitree; a lot about drones; and more!
Hello and welcome to what was previously known as “Weekly News Roundup,” but will now be called Sync.
The previous name was too long and too generic, so I spent some time thinking about a new name that draws from the language of technology and better reflects the purpose of these articles—to bring together all the essential news and stories from AI, robotics, biotech, and the bleeding edge of technology that promises to make us more than human. I believe Sync is that name.
Please let me know what you think about the new name.
And now, let’s dive into Sync #481!
The main story this week is California's controversial AI bill, SB 1047, how it was declawed by Silicon Valley, and what this reveals about tech companies.
In other news, Neuralink shared a progress update on their second clinical trial.
In the AI space, Microsoft is not giving up on Windows Recall yet, Eric Schmidt says the quiet part out loud, and Nvidia’s AI NPCs will debut in a game next year.
There’s been a lot of activity in robotics this week, with Unitree announcing it's ready to mass-produce its cheaper, $16,000 humanoid robot. Meanwhile, Boston Dynamics shows off how good the new all-electric Atlas is at doing push-ups, and a company with ties to the legendary IHMC reveals its own humanoid robot. We also have a mini-section about drones.
We’ll wrap up this week’s issue with a look at a team turning plastic-eating bacteria into food and how to win a bike race by hacking opponents’ gear shifters.
Enjoy!
California's controversial AI bill
The global race to regulate AI is intensifying. The EU has the AI Act, which is now in force, and the Chinese government is also regulating the use of AI. Meanwhile, the US does not yet have its own equivalent of the EU’s AI Act at the federal level, but that hasn’t stopped individual states from putting forward their own AI regulations. That is exactly what California is doing.
In February 2024, Senator Scott Wiener introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as SB 1047. This landmark bill is designed to prevent AI systems from causing catastrophic harm, such as mass casualties or large-scale cyberattacks, by imposing strict safety requirements on the developers of the most advanced AI models.
However, those proposed stringent safety requirements were met with fierce opposition from Silicon Valley and showed what tech companies will do to protect their interests.
The original SB 1047
The original version of SB 1047 introduced a comprehensive framework aimed at preventing catastrophic harm from advanced AI systems. The bill specifically targeted large AI models—those that cost at least $100 million to train and use at least 10^26 FLOPS of computing power (these thresholds set the bar roughly at the level of resources needed to train GPT-4 and could be raised as needed). Developers of these systems were required to implement stringent safety protocols, including an "emergency stop" feature that would allow for the immediate shutdown of an AI model if it posed a significant risk. Additionally, developers had to undergo annual third-party audits to ensure compliance with these safety measures.
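Expressed as a quick sketch, the bill's coverage test boils down to two numeric thresholds that must both be crossed. The constants mirror the figures above; the function name and structure are my own illustration, not the bill's language:

```python
# Illustrative sketch (not legal text) of the coverage test in the
# original SB 1047: a model is "covered" only if its training run both
# cost at least $100 million and used at least 10^26 FLOPS.

COST_THRESHOLD_USD = 100_000_000    # $100 million training cost
COMPUTE_THRESHOLD_FLOPS = 1e26      # 10^26 floating-point operations

def is_covered_model(training_cost_usd: float, training_flops: float) -> bool:
    """Return True if a model crosses both of the bill's original thresholds."""
    return (training_cost_usd >= COST_THRESHOLD_USD
            and training_flops >= COMPUTE_THRESHOLD_FLOPS)

# A roughly GPT-4-scale training run would be covered:
print(is_covered_model(150_000_000, 2e26))   # True
# A small fine-tuning run would not be:
print(is_covered_model(5_000_000, 1e24))     # False
```

Because both conditions must hold, an expensive but compute-light project (or vice versa) would fall outside the bill's scope—which is also why the amended bill's $10 million fine-tuning carve-out matters for smaller developers.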
To enforce these rules, SB 1047 proposed the creation of the Frontier Model Division (FMD), a new government agency responsible for overseeing AI safety. The FMD, governed by a five-person board with representatives from the AI industry, open-source community, and academia, would set safety guidelines and advise the California attorney general on potential violations. Developers were also required to submit safety certifications under penalty of perjury, providing legal assurance that their AI systems adhered to the bill’s stringent requirements.
Non-compliance with these regulations could result in severe penalties, with fines reaching up to $10 million for the first violation and up to $30 million for subsequent offences. The bill also empowered the attorney general to bring civil actions against developers who failed to comply. To encourage transparency, SB 1047 included whistleblower protections for employees who reported unsafe AI practices to the authorities.
It is also worth noting that the proposed rules would apply to any company doing business in California, regardless of where they are based.
Silicon Valley did not like SB 1047
However, the bill faced strong opposition from Silicon Valley and the broader AI community. Critics argued that the bill’s regulations could stifle innovation, particularly for startups and open-source projects.
In an open letter opposing SB 1047, OpenAI warned that the proposed bill could significantly hinder AI innovation and drive companies out of California. a16z, one of the largest venture capital firms in Silicon Valley, also strongly opposed SB 1047, arguing that it would burden start-ups with its arbitrary and shifting thresholds. Yann LeCun, Meta's Chief AI Scientist, has been vocal against the bill, arguing that it would harm research efforts and could effectively "kill" open-source AI. The Chamber of Progress, a tech industry trade group representing companies like Google, Apple, and Amazon, stated that SB 1047 would restrain free speech and push tech innovation out of California. Other opponents of the bill included Fei-Fei Li, a prominent AI researcher and Stanford professor, who criticised the bill for potentially harming California’s AI ecosystem, and Andrew Ng, who argued that the bill makes a fundamental mistake by regulating AI technology instead of specific AI applications, which he believes would be a more effective approach.
The amended SB 1047
The strong opposition from the tech industry and the AI community has led California’s lawmakers to change the original bill.
One of the most notable changes was the reduction of the attorney general's power. Initially, the bill allowed the attorney general to sue AI developers for negligent safety practices before a catastrophic event occurred. The amended bill now limits this power, permitting the attorney general to seek injunctive relief to stop potentially dangerous activities, but lawsuits can only be filed after a harmful event has taken place.
Another major amendment was the removal of the Frontier Model Division. Instead, the bill establishes the Board of Frontier Models within the existing Government Operations Agency, expanding the board from five to nine members. This board will still be responsible for setting thresholds, issuing safety guidelines, and regulating auditors, but within a more integrated governmental structure.
Additionally, the amendments brought leniency to the bill’s safety certification requirements. Developers are no longer required to submit safety certifications under penalty of perjury. Instead, developers will be required to provide public statements about their safety practices. The bill also introduced protections for open-source AI projects, ensuring that smaller developers who spend less than $10 million fine-tuning a model are not held liable, shifting the responsibility back to the original developers.
The weakened rules, however, still do not satisfy everyone. Martin Casado, general partner at a16z, wrote in a tweet that “the edits are window dressing. They don’t address the real issues or criticisms of the bill.” Additionally, eight US Congress members representing California wrote a letter asking Governor Gavin Newsom to veto SB 1047.
What’s next for SB 1047?
Despite the remaining opposition, on 15th August 2024 the amended SB 1047 passed through California’s Appropriations Committee and is now heading to California’s Assembly floor for a final vote. If it passes, the bill will return to the Senate for approval of the latest amendments before potentially being signed into law by Governor Newsom.
If SB 1047 is signed into law, it could significantly impact how AI is regulated in the US. Other states may follow California’s example and enact their own AI laws. Furthermore, any potential federal law to govern AI might borrow ideas from SB 1047.
However, the whole SB 1047 story showcases the tech industry’s reluctance to be regulated. Publicly, these companies claim to be in favour of regulations, but when new regulations are proposed, they do whatever they can to either block or weaken them, as seen with SB 1047. If anything, SB 1047 is a “mask off” moment for the AI industry, revealing how far tech companies will go to protect their technology and interests from being regulated and held accountable.
If you enjoy this post, please click the ❤️ button or share it.
Do you like my work? Consider becoming a paying subscriber to support it
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is generous support for the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🦾 More than a human
PRIME Study Progress Update — Second Participant
Neuralink shared an update regarding its second clinical trial. According to the statement, the patient received his Link last month, was discharged the following day, and his recovery has been smooth. Since then, the patient has reportedly been using CAD software and playing games using only his thoughts. Neuralink also reports that the retraction of threads, which was observed in the first clinical trial, hasn’t occurred in the second patient.
Designer Babies Are Here — So Why Aren't We Talking About It?
This article discusses the ethical and societal challenges of prenatal genome editing, highlighting the controversy surrounding germline genome editing, where DNA changes can be inherited, and the absence of public debate despite significant developments, such as the birth of genetically modified babies in China in 2018. The article stresses the importance of engaging communities, particularly those affected by genetic diseases, to explore the ethics of genome editing. It also raises concerns about health equity and the trust gap in healthcare, urging the need for continuous ethical oversight and broader societal discussions.
🔮 Future visions
▶️ The Next Technological Revolution (45:02)
Steam power, electricity, computers, the internet—these are some of the technologies that have revolutionised our daily lives. But what technology will be next to join this list? There are many contenders, and in this video, Isaac Arthur takes a closer look at some of them, from 3D printing to biotech, nanotech to advanced robotics, and new energy sources, examining how they could revolutionise our lives in the near future.
🧠 Artificial Intelligence
Microsoft will release controversial Windows Recall AI search feature to testers in October
Do you remember Windows Recall? That controversial feature in Windows that periodically takes screenshots of your screen and feeds them into an AI model so you can later search through everything you’ve seen or done on your PC? It was met with heavy criticism and raised questions about user privacy, but now Microsoft has announced that Windows Recall is coming back in October for members of the Windows Insider Program with Copilot+ PCs. There is no information about when this feature will be widely available.
Ex-Google CEO says successful AI startups can steal IP and hire lawyers to ‘clean up the mess’
Recently, Eric Schmidt, former CEO and chairman of Google, gave a talk at Stanford that made some waves. The talk, which was removed from YouTube (though transcripts are available online), provides insight into how Silicon Valley really thinks and operates, including its true stance on remote work and its admiration for those who get the most out of their workers. Regarding AI, Schmidt advises stealing all the content and worrying about legal consequences once the service takes off and can afford “a whole bunch of lawyers to go clean the mess up.”
Brands should avoid AI. It’s turning off customers
A recent study showed participants a range of products, including vacuum cleaners, TVs, consumer services, and health services, with some described as “high tech” and others as using AI. The study found that labelling a product as using AI significantly lowers consumers' intention to buy it. This hesitation is linked to a lack of trust, fuelled by the higher expectations for AI to be error-free and concerns over privacy and data transparency.
White House says no need to restrict ‘open-source’ artificial intelligence — at least for now
The White House has expressed support for open-source AI technology, stating in a recent report that there is no immediate need for restrictions on making key AI components widely available. The report is the first from the US government to address the debate between proponents of open and closed AI systems, emphasizing the need for continued monitoring of potential dangers while recognizing the innovative benefits of openness.
Nvidia’s AI NPCs will debut in a multiplayer mech battle game next year
Nvidia ACE, the company’s AI-powered system for giving voices and conversation skills to in-game characters, is going to be used in the upcoming game Mecha Break, a new multiplayer mech battle game coming to PC, Xbox Series X/S, and PlayStation 5 in 2025. I can’t wait to see how gamers will break these AI NPCs.
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
🤖 Robotics
$16,000 humanoid robot ready to leap into mass production
Chinese robotics company Unitree announced that their more affordable humanoid robot, G1, is ready for mass production, with pricing starting at $16,000. The G1 boasts advanced features such as 3D LiDAR, a RealSense depth camera, noise-cancelling microphones, and a quick-release battery. The company also released a new video showing the G1 jumping and performing other acrobatic tricks.
In the first video since the announcement of the new all-electric Atlas, Boston Dynamics shows how good their robot is at doing push-ups.
Meet Boardwalk Robotics' Addition to the Humanoid Workforce
A new humanoid robot joins the party! Named Alex, it is made by Boardwalk Robotics, a company related to the legendary IHMC, the Institute for Human and Machine Cognition in Pensacola, Florida. The new robot consists only of a torso with two arms, with legs expected in the future. The release video shows how Alex can handle various tasks with great dexterity and speed. Boardwalk is currently selecting commercial partners for a few more pilots, and for researchers, the robot is available right now.
Tesla is hiring people to do the robot
Tesla is looking for people to train their upcoming humanoid robot. The position, titled Data Collection Operator, will require applicants to walk for over 7 hours a day while carrying up to 30 lb (about 14 kg) and wearing a VR headset and a motion capture suit. Those in this role will provide valuable data used to train the Tesla Bot in how to move and complete various tasks.
Ikea expands its inventory drone fleet
To improve efficiency in its massive warehouses, Ikea has deployed 100 inventory drones across 16 European warehouses. Produced by Verity, these yellow and blue drones have been providing continuous inventory updates and reaching areas inaccessible to humans and most robots since 2021. While Verity's partnership with Ikea stands out, other companies like Corvus Robotics and Gather AI are also competing in the expanding inventory drone market.
What’s next for drones
The article discusses the evolving landscape of drone technology, highlighting four key developments: the growing use of drones by police forces, advancements in drone delivery services, the impact of the American Security Drone Act on domestic drone production, and the increasing reliance on autonomous drones in the Ukraine war. These developments raise important questions about privacy and security, as well as the ethical implications of autonomous weapons.
Amazon's delivery drones are so loud they are like a 'giant hive of bees’
As Amazon expands its Prime Air program in Texas from 200 to 469 flights a day, it faces a new problem—pushback from residents complaining about the noise the drones generate, with one resident describing the sound as “like a giant hive of bees.” They are not alone in this sentiment—similar concerns about the noise have been raised in Australia and Nepal, where other drone delivery test programs are taking place.
🧬 Biotechnology
How we could turn plastic waste into food
A team of researchers is working on an innovative method of dealing with plastic waste. The idea is to create plastic-eating bacteria and feed them waste plastic, such as water bottles or food wraps. The bacteria would then be dried and converted into food suitable for human consumption. Before using the plastic-eating microbes as food for humans, the research team will submit evidence to regulators to demonstrate that the substance is safe, which they hope will happen in the next year or two.
💡Tangents
Want to Win a Bike Race? Hack Your Rival’s Wireless Shifters
Here’s more proof that we live in a cyberpunk future. A team of security researchers from UC San Diego and Northeastern University has found a way to hack the wireless gear-shifting systems used by many top cycling teams, including those that compete in the Olympics and the Tour de France. Their relatively simple radio attack would allow cheaters or vandals to spoof signals from as far as 10 meters away, causing a target bike to unexpectedly shift gears or jamming its shifters to lock the bike into the wrong gear. I can now imagine a professional cyclist who’s also put some skill points into netrunning.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"