Shaping the Future of AI: Who Will Be Granted the Keys to Innovation?
The conversation about regulating AI is not only about AI safety but also about power and control: who will be allowed to work on frontier models
Last week, significant events unfolded in the realm of AI regulation, marking pivotal moments that could shape the future trajectory of artificial intelligence. At Bletchley Park, policymakers and tech leaders from around the globe convened to debate the future of AI governance. Simultaneously, across the Atlantic, President Biden issued an Executive Order laying the groundwork for future AI regulations in the US.
Yet the conversation about regulating AI is not only about AI safety; it’s equally about the concentration of power and determining who gets to work on cutting-edge models. This article explores the latest developments, the subtleties of proposed AI regulations, and the prospective ramifications for the entire AI sector.
The First AI Safety Summit
On November 1st and 2nd, 2023, Bletchley Park welcomed representatives from 28 countries, leading AI companies, and academics to discuss regulating artificial intelligence at the first AI Safety Summit. The symbolism of this special place in British history and the history of computing is unmissable. It was here that codebreakers cracked the Enigma code, helping end World War II. The world’s first programmable, electronic digital computer, the Colossus, was also constructed here. And now, world leaders gathered here for the inaugural AI Safety Summit.
Over two days, “like-minded governments and artificial intelligence companies”, as UK Prime Minister Rishi Sunak described it, discussed the potential threats, both short- and long-term, posed by the misuse of AI and explored how to regulate the AI industry.
The Bletchley Declaration
The main outcome of the summit was the signing of The Bletchley Declaration. Signed by the 28 countries attending the summit and the European Union (signatories include the US, the UK and China), the declaration acknowledges the transformative potential of AI and its increasing role in various domains of daily life. AI has “the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible”, says the declaration. “We recognise that this is therefore a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally”.
Signatories commit to designing, developing, and using AI in ways that are safe, human-centric, trustworthy, and responsible. They acknowledge the unique opportunities AI presents for enhancing global well-being and the potential risks, especially from advanced "frontier" AI models that might surpass current capabilities in areas like cybersecurity and biotechnology.
Emphasizing the global nature of AI challenges, the declaration calls for international collaboration, engaging stakeholders ranging from national governments to academia. It urges pro-innovation regulatory approaches that balance AI’s benefits and risks while ensuring transparency and safety, especially for powerful and potentially harmful AI systems.
To ensure responsible use of AI, signatories pledge support for an international research network on AI safety. The UK and the US have already announced the launch of their own AI safety institutes and we can expect other countries to follow soon.
A Collaborative Commitment to AI Safety Testing
Another commitment from the summit concerned AI safety testing. To boost public confidence in AI safety, leaders of the countries invited to the summit and representatives of major tech companies, such as Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta, Microsoft, Mistral AI, OpenAI and xAI, emphasized the need for rigorous assessment of the risks posed by new AI models, with the safety of such systems being a collective responsibility. Governments will be responsible for setting AI standards and for ensuring that external evaluations of frontier AI models are conducted under their local legal frameworks, particularly concerning national security and societal risks. Plans include increased investment in public sector AI testing capabilities, sharing evaluation outcomes, and developing shared safety standards. The list of countries that signed up for the safety testing collaboration did not include China, whose representatives were not part of the second day of talks.
The AI Safety Summit at Bletchley Park was the first of many to come. The next one is scheduled to be held in South Korea in six months, and a third summit will follow in France a year from now.
The Executive Order on AI
The other big thing that happened last week in the world of AI regulation was the US President’s Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence (the full text of the EO can be found here). The EO aims to establish standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, support consumers and workers, foster innovation and competition, enhance American leadership globally, and ensure responsible government use of AI.
The EO mostly lays the groundwork for future AI legislation and prepares federal agencies to think about AI in their respective areas of responsibility. It requires each Cabinet agency to appoint a new Chief AI Officer who will be responsible for any new AI-related guidelines and frameworks. It also requires many government agencies to create guidelines for the safe development and deployment of AI models and for assessing the impact those models could have on the country’s safety and security.
The EO also includes streamlining the visa process for top AI talent to entice them to move to the US. There are also plans to attract AI talent to work in the government (which might be challenging, given how much an AI engineer can earn in Silicon Valley compared with what government positions usually offer).
But there was one part of the EO that caught the most attention.
The Controversy Over Compute Thresholds
What caught the attention of the AI community was the requirement to share with the US government the “safety test results” of any AI model whose training compute crosses a certain threshold. Any model trained using more than 10^26 floating-point operations (FLOPs) in total, and any computing cluster capable of more than 10^20 FLOPs per second, must be reported to the appropriate federal agency. If the model is trained primarily on biological sequence data, the threshold drops to 10^23 operations.
The rule to report AI models based on computing power is very similar to what Sam Altman proposed to the US Senate in May. There is no explanation for why those specific numbers were chosen. I suspect they were selected so that current AI models don’t have to undergo additional safety assessments. As far as I know, none of the AI models available now crosses those thresholds (GPT-4 is estimated to have used about 2.1 * 10^25 FLOPs of compute during training). However, the next generation of models, such as GPT-5, Google Gemini and other models that haven’t left the labs yet, might cross them.
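For a rough sense of scale, here is a minimal sketch in Python comparing a training run’s estimated compute against the EO’s reporting thresholds. The GPT-4 figure is the public estimate cited above, and the helper function is purely illustrative, not an official compliance check:

```python
# Rough, illustrative comparison of training compute against the EO's
# reporting thresholds. The numbers are estimates, not official figures.

GENERAL_THRESHOLD_FLOPS = 1e26  # total training operations for general models
BIO_THRESHOLD_FLOPS = 1e23      # total training operations for models trained on biological data

def must_report(training_flops: float, biological_data: bool = False) -> bool:
    """Return True if a training run would cross the EO's reporting threshold."""
    threshold = BIO_THRESHOLD_FLOPS if biological_data else GENERAL_THRESHOLD_FLOPS
    return training_flops > threshold

# GPT-4 is estimated to have used roughly 2.1e25 FLOPs of training compute.
print(must_report(2.1e25))                        # False: below the general 1e26 threshold
print(must_report(2.1e25, biological_data=True))  # True: would cross the 1e23 biological threshold
```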
The idea of computing thresholds stirred some controversy amongst AI researchers. As some experts point out, you don’t need a huge model to cause harm. For example, a group of researchers fine-tuned Llama-70B to the point where it came very close to providing the information needed to reconstruct the Spanish Flu virus. The computing power needed to train the Llama models was below the thresholds, and a Llama-70B model can run on a high-end gaming PC or a relatively cheap AI workstation. One might argue that having the sequence of a deadly virus that killed 25–50 million people is not enough, since you still need the skills and equipment to make that virus. But even those barriers could become lower, thanks to various projects introducing LLM agents that help conduct scientific research or design experiments.
There is also an assumption that the computational power required to train and run even more advanced models will keep going up. But that is not guaranteed. Researchers and engineers will find ways to make AI models more efficient, which raises further questions. What will happen if someone finds a way to achieve the same level of performance using less computing power? Or if someone creates a new chip that slashes the required computing power by orders of magnitude? And what about small but fine-tuned models? Instead of fixing thresholds on computing power, some experts argue that models should be evaluated based on their capabilities. If, for example, a model can describe how to make a deadly virus, then it should fall under stricter regulations, no matter how big or small it is.
What is interesting is that the EO requires developers of the most powerful AI systems (those that exceed the thresholds) to share their safety test results and other critical information with the US government under the Defense Production Act, a law originally designed for wartime and national defence purposes. The wording of the EO suggests that advanced AI can be treated as a potential military asset or as an asset vital to national security.
The Question of Open Source and Who Will Be Allowed to Work on AI
Open source is a tricky variable in the conversation on regulating AI. Without open source and open collaboration, AI wouldn’t be where it is today. At the same time, open-source models can be used for malicious purposes.
The ideals of open source and free collaboration are deeply embedded in the software industry and the software engineering community. Open-source projects play a vital role in advancing software, from the operating systems running computers in server farms to the programming languages, tools and libraries software is built with. Open source is where many innovative solutions were born, solutions that went on to transform how we interact with computers and what we can do with them.
The artificial intelligence community was built on top of open source and open collaboration. Frameworks such as TensorFlow and PyTorch, which are used to build many AI models, are open source and free to use, and so are some of the models themselves. At any moment, you can go to Hugging Face, download one of the many AI models available and start playing with it, as the short sketch below shows. This low barrier to access enables many people to join the AI community and start contributing. But it also opens the possibility of misuse. As we have seen when discussing the idea of computing thresholds, it is relatively easy to use open-source models to cause damage. As the attendees of the AI Safety Summit noted:
While open access models have some benefits like transparency and enabling research, it is impossible to withdraw an open access model with dangerous capabilities once released. This merits particular concern around the potential of open access models to enable AI misuse, though an open discussion is needed to balance the risks and benefits.
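To give a concrete sense of how low that barrier to access is, here is a minimal sketch, assuming the Hugging Face transformers library is installed and using GPT-2 purely as a small illustrative model rather than any of the frontier models discussed above:

```python
# A minimal sketch of pulling an open model from Hugging Face and running it locally.
# Assumes `pip install transformers torch`; GPT-2 is used only as a small example.
from transformers import pipeline

# The model weights are downloaded on first run and cached locally.
generator = pipeline("text-generation", model="gpt2")

result = generator("Open-source AI makes it easy to", max_new_tokens=30)
print(result[0]["generated_text"])
```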
Biolabs and AI: A Possible Future Parallel?
AI research and development might follow the path of biotech. Creating and running an AI lab could be regulated in the same way biolabs are, with multiple safety levels based on the risks associated with the work done in the lab. People who want to work in such a lab may have to pass safety checks or obtain a licence before being allowed to do so. A licensing system for AI research and development was not proposed in the EO, but it is something Sam Altman suggested to the US Senate in May, and such a system could yet be implemented in the US or other countries. This path leads to excluding smaller, less funded teams from participating in cutting-edge research and to concentrating AI research and development in the hands of a small number of big players, such as big tech companies or large universities, something those big players might want to happen under the guise of AI safety.
The Intersection of Power, Control, and AI Regulations
Shortly after GPT-4 was released and the entire world went crazy about AI, Sam Altman began warning about the risks posed by AI and the need for regulations. His fears may be sincere, but as leaders in the AI space, Altman and others like him have a lot of influence. The big players in AI might have realised that governments around the world, having failed to regulate social media, would not want to repeat that mistake with AI. Instead of having rules forced on them, these big players are cooperating with legislators to shape rules that benefit them.
This is a view that Andrew Ng and Yann LeCun, both accomplished AI researchers and deep learning pioneers, hold. In a post on X, LeCun describes industry leaders such as DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei as those “who are attempting to perform a regulatory capture of the AI industry”. LeCun does not think AI is going to wipe out humanity. Instead, he fears that the current trajectory will lead to a small number of companies, mostly based in the US and China, controlling AI.
“The alternative, which will *inevitably* happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people's entire digital diet,” writes LeCun. Andrew Ng, meanwhile, wrote in a post that “some lobbyists for large companies — some of which would prefer not to have to compete with open source — are trying to convince policymakers that AI is so dangerous, governments should require licenses for large AI models.”
Both Ng and LeCun argue that the direction in which the AI regulations are heading will negatively impact innovation in the AI space.
The conversation about AI regulations is not only about ensuring that AI is safe or about minimising potential bad outcomes. It is also a conversation about power and control. The upcoming regulations will decide who is allowed to work on cutting-edge frontier models and who is not. They will pick the winners and the losers in this game. I hope that humanity as a whole will end up as one of the winners.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!