EU AI Act moves closer to becoming law - Weekly Roundup - Issue #445
Plus: Tesla Optimus Gen 2; small LLMs are getting better; a biocomputer built with human neurons; first CRISPR medicine approved in the US; and more!
Welcome to Weekly Roundup Issue #445. The main story this week is the European Parliament and the member states of the European Council reaching an agreement on the EU AI Act, bringing the act a step closer to becoming law. We will review the new rules that will govern the use of AI in the European Union.
In other news, Tesla has showcased an updated version of its humanoid robot, Optimus. In the realm of artificial intelligence, Microsoft released Phi-2, a relatively small but very capable language model, and Mistral AI released Mixtral, a Mixture of Experts model built from eight Mistral 7B-sized experts. DeepMind's AI has discovered a new solution to a long-standing scientific puzzle. Meanwhile, the FDA has approved the first CRISPR medicine in the US, and a team of researchers has developed a biocomputer that combines human neurons and electronics.
On December 8th, 2023, the European Parliament and the member states of the European Council reached an agreement on the AI Act, bringing the European legal framework for regulating AI close to becoming law. The new rules define which applications of AI are banned in the EU, set requirements for high-risk systems and for foundation models, and establish how AI will be governed across all member states.
Banned applications
The EU AI Act specifies a set of applications of AI that are banned within the EU. The list of banned applications includes:
biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race)
untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
emotion recognition in the workplace and educational institutions
social scoring based on social behaviour or personal characteristics
AI systems that manipulate human behaviour to circumvent their free will
AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation)
Even though these applications of AI are banned in the EU, the legislation leaves an option for law enforcement to use real-time facial recognition and other remote biometric identification systems in publicly accessible spaces. Any use of such systems by law enforcement agencies requires judicial authorisation. According to the European Parliament’s press release, real-time biometric identification systems can be used in scenarios such as:
targeted searches of victims (abduction, trafficking, sexual exploitation), or
prevention of a specific and present terrorist threat, or
the localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime).
Some are disappointed that the new rules leave a loophole for using AI in facial recognition, opening the door to abuse in these scenarios. Amnesty International even calls it a “devastating global precedent”.
High-risk applications
The EU AI Act introduces a new category for high-risk AI systems. These are generally applications in which AI systems can have a significant impact on people’s health, safety, fundamental rights, the environment, democracy, or the rule of law. Examples of such applications include:
Medical devices
Vehicles
Recruitment, HR and worker management
Education and vocational training
Influencing elections and voters
Access to services (e.g., insurance, banking, credit, benefits)
Critical infrastructure management (e.g., water, gas, electricity)
Emotion recognition systems
Biometric identification
Law enforcement, border control, migration and asylum
Administration of justice
Any system classified as high-risk will be required to undergo a mandatory fundamental rights impact assessment and meet transparency and documentation requirements. Every high-risk AI system will be registered in a public database and will have to comply with data governance requirements (e.g., bias mitigation, representative training data), provide transparency artefacts (e.g., instructions for use, technical documentation), and implement risk and quality management systems. EU citizens will have the right to launch complaints about high-risk AI systems and receive explanations for decisions impacting their rights. High-risk AI systems will also need human oversight, enabling operators to understand the system’s operations and to shut it down if necessary.
Foundation models
The negotiation of the AI Act was nearly derailed by the issue of how to regulate foundation models, largely due to the efforts of Germany and France to protect their AI startups, Aleph Alpha and Mistral AI, respectively.
In the proposed law, foundation models are categorized under the General Purpose AI (GPAI) rules, which cover AI systems capable of performing a broad range of tasks. According to the document seen by Bloomberg, these systems must adhere to basic transparency rules: maintaining an acceptable-use policy, keeping up-to-date records of their training methodologies, and providing a detailed summary of the data used for training. GPAI systems are also required to respect copyright law. These rules do not apply to models that are free and open-source, and AI systems used solely for research and innovation fall explicitly outside the scope of the AI Act.
Models deemed to pose a “systemic risk” will be subject to additional rules, according to the document seen by Bloomberg. The EU will determine this risk based on the computing power used to train the model (a minimal sketch of this compute-based trigger follows the list below). Developers of such highly capable models will be required to adhere to a code of conduct until the European Commission develops more harmonized, long-term controls. Those failing to adhere will need to demonstrate their compliance with the AI Act to the Commission. The exemption for open-source models does not extend to those identified as posing a systemic risk. Models posing a systemic risk will be required to:
Report their energy consumption
Perform red-teaming, or adversarial tests, either internally or externally
Assess and mitigate possible systemic risks, and report any incidents
Ensure they’re using adequate cybersecurity controls
Report the information used to fine-tune the model, and the model’s system architecture
Conform to more energy-efficient standards if and when such standards are developed
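
As mentioned above, the trigger for the “systemic risk” tier is training compute. Here is a minimal sketch of how such a compute-based test works; the FLOP threshold below is a placeholder assumption, since the actual cut-off is still being worked out at a technical level (see the closing section of this story).

```python
# Placeholder threshold: the real legal value has not been set yet.
TRAINING_COMPUTE_THRESHOLD_FLOP = 1e25  # assumption for illustration only

def poses_systemic_risk(training_compute_flop: float) -> bool:
    """Classify a GPAI model as posing 'systemic risk' by training compute alone."""
    return training_compute_flop >= TRAINING_COMPUTE_THRESHOLD_FLOP

print(poses_systemic_risk(3e24))  # False under the assumed threshold
print(poses_systemic_risk(5e25))  # True under the assumed threshold
```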
New governance structure
To enforce the rules outlined in the EU AI Act, a new governance architecture will be put in place, consisting of the AI Office and the AI Board.
An AI Office within the Commission will be tasked with overseeing the most advanced GPAI models, fostering standards and testing practices, and enforcing the common rules across all member states. The AI Office will be advised by a scientific panel of independent experts, which will keep the Office informed about the latest developments and how to respond to them.
The AI Board, which will comprise representatives from the member states, will serve as a coordination platform and an advisory body to the Commission. It will play an important role in the implementation of the regulation, including the design of codes of practice for foundation models. An advisory forum for stakeholders, such as industry representatives, SMEs, start-ups, civil society, and academia, will be set up to provide technical expertise to the AI Board.
Fines for breaking the rules
The fines for violations of the AI Act are determined as either a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher (a worked example follows the list below). The fines are structured as follows:
Violations of banned AI applications: €35 million or 7% of global annual turnover
Violations of the AI Act’s obligations: €15 million or 3% of global annual turnover
Supply of incorrect information: €7.5 million or 1.5% of global annual turnover
Additionally, the provisional agreement sets more proportionate caps on administrative fines for SMEs and start-ups in the event of infringements of the AI Act's provisions.
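
To make the “whichever is higher” rule concrete, here is a small worked example in Python. The tier amounts and percentages are the ones listed above; the company and its turnover are invented for illustration.

```python
def ai_act_fine(tier: str, global_annual_turnover: float) -> float:
    """Return the applicable fine in euros: the flat amount or the
    turnover-based percentage, whichever is higher."""
    tiers = {
        "banned_application": (35_000_000, 0.07),
        "obligation_violation": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.015),
    }
    flat_amount, turnover_share = tiers[tier]
    return max(flat_amount, turnover_share * global_annual_turnover)

# A hypothetical company with €2bn global annual turnover deploying a banned application:
print(f"€{ai_act_fine('banned_application', 2_000_000_000):,.0f}")  # €140,000,000 (7% beats the €35m floor)
```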
What about supporting innovation?
The new rules aim to foster innovation and evidence-based regulatory learning. A significant change from the initial Commission proposal is the enhancement of AI regulatory sandboxes, which are now explicitly designed to permit the development, testing, and validation of innovative AI systems under real-world conditions. New provisions have also been introduced for testing AI systems in real-world environments, subject to specific conditions and safeguards. To reduce the administrative burden on smaller companies, the provisional agreement outlines specific support actions and allows for limited, well-defined exceptions.
When do the new rules come into force?
The new rules will enter into force in phases after the law is adopted, as outlined in a detailed timeline provided by TechCrunch. Six months after the law’s adoption, the rules on banned applications will come into force. Six months after that (12 months after adoption), the transparency and governance requirements will apply. A year later still (two years after adoption), all other requirements will be enforced. We may therefore have to wait until the spring of 2026 for the full implementation of the EU AI Act.
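
As a quick sanity check on that schedule, here is a small sketch that computes the milestone dates from an assumed adoption date; the actual adoption date was not yet fixed when this issue went out, so the date below is an illustration only.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping to the 1st for simplicity."""
    year, month = divmod(d.year * 12 + (d.month - 1) + months, 12)
    return date(year, month + 1, 1)

adoption = date(2024, 4, 1)  # assumed adoption date, for illustration only
milestones = {
    "bans on prohibited applications apply": add_months(adoption, 6),
    "transparency and governance rules apply": add_months(adoption, 12),
    "all remaining requirements apply": add_months(adoption, 24),
}
for description, when in milestones.items():
    print(f"{when}: {description}")  # full application lands in spring 2026
```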
The EU AI Act has cleared the major roadblocks and now faces mostly formalities: the final text will undergo votes in the parliament and, once accepted, the European Council will adopt it. Work will continue at a technical level in the coming weeks to finalize the details of the new regulation, such as determining the computing power threshold that will deem a foundation model a 'systemic risk'. The full scope of the rules for AI in the European Union will be known in the spring of 2024.
The full text of the EU AI Act can be read here.
If you enjoy this post, please click the ❤️ button or share it.
I warmly welcome all new subscribers to the newsletter this week. I’m happy to have you here and I hope you’ll enjoy my work. A heartfelt thank you goes to everyone who joined as paid subscribers this week.
The best way to support the Humanity Redefined newsletter is by becoming a paid subscriber.
If you enjoy and find value in my writing, please hit the like button and share your thoughts in the comments. Additionally, please consider sharing this newsletter with others who might also find it valuable.
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous support towards the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🧠 Artificial Intelligence
OpenAI Demos a Control Method for Superintelligent AI
OpenAI's superalignment program, launched earlier this year to work out how to control or align superintelligent AI systems with human goals, has released its first paper, exploring the concept of 'weak-to-strong generalization.' In the paper, researchers experimented with a unique approach: using a less advanced AI model, GPT-2, to supervise a more advanced one, GPT-4. The experiment aimed to determine whether a less capable model could effectively guide a more powerful one. The results were promising: the stronger AI outperformed the weaker one, especially on natural language processing tasks, even when following the weaker model's imperfect instructions. These results suggest potential in managing superintelligent AI systems, but they also raise concerns about the possibility of such AIs ignoring erroneous human instructions, emphasizing the need for further development in this area.
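
For intuition, here is a toy sketch of the weak-to-strong setup, with the GPT-2/GPT-4 pair swapped for simulated noisy supervision and a boosted-tree student on synthetic data. It illustrates the core phenomenon (a student scoring higher than the quality of its supervision) and is emphatically not OpenAI's actual experiment.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=4000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Simulate a weak supervisor: its labels are only ~80% accurate.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.2
weak_labels = np.where(flip, 1 - y_train, y_train)
print(f"weak supervision accuracy: {(weak_labels == y_train).mean():.3f}")  # ~0.80

# The strong student sees only the weak labels during training...
student = GradientBoostingClassifier(random_state=0).fit(X_train, weak_labels)
# ...yet generalizes past its supervisor on clean test data, because the
# supervisor's random mistakes largely average out.
print(f"student accuracy on clean test set: {student.score(X_test, y_test):.3f}")  # well above 0.80
```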
Imagen 2 on Vertex AI is now generally available
Imagen 2, Google’s most capable text-to-image generator, is now available on the Vertex AI platform for developers to use in their applications. According to Google, apart from generating impressive photorealistic images, Imagen 2 can also generate logos, render text in multiple languages, and perform 'visual question answering.' Each image generated by Imagen 2 carries a SynthID watermark, making it easier to verify whether an image is real or AI-generated.
Phi-2: The surprising power of small language models
Microsoft has released a new language model named Phi-2. First announced at the Microsoft Ignite conference a month ago, Phi-2 is a relatively small, 2.7 billion-parameter model that, thanks to better training methods and better-curated training data, outperforms much larger models, such as Mistral 7B and Meta’s Llama 2 7B and 13B, and even matches the far larger Llama 2 70B on some benchmarks. These training methods not only resulted in better performance but also made Phi-2 less prone to generating toxic content. Phi-2 represents an interesting trend in large language models: a shift away from ever-larger parameter counts towards smaller but far more capable models. This is a trend we can expect to produce some interesting results in 2024.
Mistral AI releases Mixtral 8x7B mixture of experts model
Mistral AI, a French AI startup that recently reached a $2B valuation, has released an interesting model. Mixtral is a sparse Mixture of Experts model that combines eight Mistral 7B-sized expert networks into one larger system. According to Mistral AI, Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. Mixtral is an open-source model and is available for download on Hugging Face.
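
For a sense of how a sparse Mixture of Experts works, here is a minimal numpy sketch of top-2 routing over eight experts, the scheme Mixtral's description implies: a router scores the experts for each token and only the two best actually run. The dimensions and random weights are illustrative assumptions, not Mixtral's real parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2

router_w = rng.normal(size=(d_model, n_experts))                 # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts and mix the outputs."""
    logits = x @ router_w
    top = np.argsort(logits)[-top_k:]                            # indices of the 2 best experts
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()      # softmax over the selected 2
    # Only the selected experts compute anything, which is why inference stays
    # fast even though the total parameter count is roughly 8x one expert.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (64,)
```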
Google DeepMind used a large language model to solve an unsolvable math problem
Google DeepMind has published a paper on a new system named FunSearch, which has discovered a new solution to a long-standing scientific puzzle. Unlike AlphaTensor and AlphaDev, which were more akin to AlphaZero, FunSearch (where “Fun” stands for functions, not for fun) combines a large language model called Codey (a code-completion model based on Google’s PaLM 2) with evolutionary algorithms and an evaluator to find solutions to problems set by researchers. I see some parallels between FunSearch and what may be the next breakthrough in AI research: giving AI models the capability to “think deeply” about how to solve complex problems.
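
Here is a toy, runnable sketch of a FunSearch-style loop: propose variants of the best candidates, score them with an evaluator, keep the fittest, repeat. In this sketch the "LLM" is just a random mutation of a coefficient vector (a crude stand-in for Codey) and the "puzzle" is fitting x**2 + 1; the real system mutates actual program source code.

```python
import random

def target(x):
    return x * x + 1  # the 'puzzle' to solve: recover this function

def evaluate(coeffs):
    """Score a candidate polynomial: negative squared error against the target."""
    return -sum((sum(c * x**i for i, c in enumerate(coeffs)) - target(x)) ** 2
                for x in range(-5, 6))

def propose(parent):
    """Stand-in for the LLM: perturb one coefficient of the parent candidate."""
    child = list(parent)
    child[random.randrange(len(child))] += random.uniform(-0.5, 0.5)
    return child

pool = [[0.0, 0.0, 0.0] for _ in range(5)]                 # initial population
for _ in range(5000):
    parent = max(random.sample(pool, 3), key=evaluate)     # tournament selection
    pool.append(propose(parent))
    pool = sorted(pool, key=evaluate, reverse=True)[:20]   # keep the fittest
print(pool[0])  # approaches [1, 0, 1], i.e. 1 + 0*x + 1*x**2
```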
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
🤖 Robotics
Tesla Unveils New Humanoid Robot
Tesla has shown a new, lighter, and more capable Optimus Gen 2 humanoid robot. The new robot has new and improved sensors, including tactile sensing in all fingers (in other words, it can feel the things it touches). The demo is quite impressive, especially when compared to where Tesla was with Optimus just six months ago. The company said that the first production units will arrive by the end of 2023 and should be commercially available around 2027. Tesla is not the only company working on bringing commercial humanoid robots to market; you can learn about other teams in this article, where I highlighted more companies promising to offer commercial humanoids within the next few years.
ANYmal’s Wheel-Hand-Leg-Arms Open Doors Playfully
Researchers from ETH Zurich took an off-the-shelf ANYmal quadruped robot, replaced its feet with wheels, and taught it to stand and manipulate objects using its wheeled limbs as hands. The result is a rather bizarre-looking robot that can transform between wheeled and standing forms. In the video, the researchers present a curiosity-driven reinforcement learning approach to training the robot. This approach encourages the robot to interact with specific objects, and through that interaction it learns to use its body effectively to achieve tasks such as opening doors or moving boxes.
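
One common way to implement such a curiosity signal, shown in the minimal numpy sketch below, is to reward the agent in proportion to how badly a learned forward model predicts the next state, so poorly understood interactions (like pushing a door handle) pay off. Whether this matches the ETH Zurich formulation exactly is an assumption on my part; treat it as a generic illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim = 8, 2
W = rng.normal(scale=0.1, size=(state_dim + action_dim, state_dim))  # linear forward model

def curiosity_bonus(state, action, next_state, lr=0.01):
    """Intrinsic reward = forward-model prediction error, with an online update."""
    global W
    x = np.concatenate([state, action])
    prediction_error = next_state - x @ W
    W += lr * np.outer(x, prediction_error)        # the model improves with experience,
    return float((prediction_error ** 2).mean())   # so familiar transitions stop paying off

# One step of experience: total reward = extrinsic reward + weighted curiosity bonus.
s, a, s_next = rng.normal(size=state_dim), rng.normal(size=action_dim), rng.normal(size=state_dim)
total_reward = 0.0 + 0.5 * curiosity_bonus(s, a, s_next)
print(total_reward)
```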
Amazon Industrial Fund leads Instock seed round for novel fulfillment robots
Instock, a robotics company specializing in warehouse fulfilment robots, has raised $3.2 million, bringing its seed round to a total of $6.2 million. Amazon’s Industrial Innovation Fund led the latest round, potentially paving the way for Instock’s robots to be deployed in Amazon’s warehouses, where they would join an existing fleet of over 750,000 robots. Agility Robotics, known for its humanoid robot Digit, which recently began tests at Amazon, is another beneficiary of the fund.
Miso Robotics and Cali Group open automated restaurant
Miso Robotics, known for its AI-powered cooking robots, is collaborating with Cali Group and PopID to launch CaliExpress, an automated restaurant in Pasadena, California. This venture is touted as the world's first fully automated restaurant, combining Miso's robotic cooks for grilling and frying with PopID's biometric payment system. CaliExpress, featuring the robotic fry station Flippy, offers a menu of freshly made wagyu blend burgers, cheeseburgers, and fries, promising high-quality food at competitive prices.
🧬 Biotechnology
The First CRISPR Medicine Is Now Approved in the US
The U.S. Food and Drug Administration (FDA) has approved Casgevy, a pioneering CRISPR gene editing-based medical treatment developed by Vertex Pharmaceuticals and CRISPR Therapeutics, for treating sickle cell disease. This ground-breaking therapy, already approved in the UK, involves editing patients' cells outside the body to produce healthy haemoglobin, offering a potential lifelong solution for adults and children over 12 suffering from frequent pain attacks caused by the disease. Additionally, the FDA approved another gene therapy, Lyfgenia by Bluebird Bio, which adds a therapeutic gene to cells without using CRISPR. However, Lyfgenia comes with a black box warning due to the associated risks of blood cancer. These advancements mark a significant milestone in gene therapy, providing hope for transformative treatments for sickle cell disease sufferers.
‘Biocomputer’ combines lab-grown brain tissue with electronic hardware
Researchers have created Brainoware, a biocomputer that fuses lab-grown human brain tissue with electronic circuits. The device, which connects stem-cell-derived brain organoids to traditional electronics, demonstrates that real neurons can be used in AI applications. To assess its capabilities, Brainoware was tested on a speech recognition task and successfully identified speakers with 78% accuracy. This development not only marks a step towards biological computing but also offers a new approach to brain research and the study of neurological disorders. However, challenges remain in keeping the brain organoids alive and in enhancing their capabilities for more complex tasks.
‘It’s all gone’: CAR-T therapy forces autoimmune diseases into remission
CAR-T-cell therapy has shown remarkable results in treating autoimmune disorders. In a recent study presented at the American Society of Hematology meeting, 15 participants with autoimmune conditions remained symptom-free or nearly so following the treatment. This therapy involves modifying T cells to produce proteins targeting B cells, which in autoimmune disorders attack healthy tissue. The success of these treatments, which have also shown minimal side effects, suggests a promising future for treating a variety of autoimmune diseases. However, the exact role of the accompanying chemotherapy in these outcomes is still being assessed.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!