What Sam Altman said at the Senate hearing - H+ Weekly - Issue #415
This week - Bing Chat to get new features; robotic arms from Japan; a new organelle has been found; rapid drug discovery with AI; and more!
Regulation of the AI industry is currently a major topic of discussion. While the EU is nearing passage of the AI Act and China has introduced new rules for generative AI, the US has lagged behind, only recently starting conversations about how to regulate this rapidly growing and disruptive industry.
The United States Senate Judiciary Subcommittee on Privacy, Technology, and the Law launched a series of hearings to gather input from industry leaders, academics, and researchers in order to gain a clearer understanding of what rules are needed.
For the first hearing, the committee invited:
Sam Altman (CEO and cofounder of OpenAI)
Gary Marcus (Professor Emeritus at New York University, AI researcher and notable critic of AI hype)
Christina Montgomery (Vice President and Chief Privacy & Trust Officer at IBM)
In his opening remarks, Sam Altman said that his “worst fears are that we [the industry] cause significant harm to the world” but he hopes the technology his company, OpenAI, develops could help address humanity’s biggest challenges, such as cancer or climate change.
Gary Marcus was more blunt, saying that “we have built machines that are like bulls in a china shop - powerful, reckless and difficult to control” and that we are “facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability”.
How should the US regulate the AI industry?
The main question was not whether the US government should regulate the growing AI industry, but how.
Christina Montgomery was in favour of an approach similar to the upcoming EU AI Act, whose rules “regulate by context” and could provide a good example for the US to follow.
Sam Altman and Gary Marcus proposed a licensing system overseen by an independent agency. This approach was compared to the nuclear industry, where anyone who wants to operate needs a licence.
Altman made it clear he does not want the licensing system to slow down the progress of AI or to limit AI research and development to big companies or large institutions. These regulations, whatever they end up being, should not throttle innovation and should not burden smaller players like startups, researchers and open-source projects, he argues.
The burden of obtaining a licence and being under special scrutiny should fall on companies and institutions big enough to afford it.
A licence would be required only if an AI system meets certain criteria. The easiest way, according to Altman, would be to base these criteria on the amount of computing power needed to train the AI. Ideally, however, the criteria should be based on the capabilities of the AI - for example, if a model can persuade and manipulate people into doing something, then it should be regulated and require a licence, argues Altman.
The proposed agency would not only have the power to grant a licence but also to revoke it. The AI agency should also monitor activities within the AI field and perform reviews before and after deployment. Marcus also suggested drawing inspiration from the FDA's approval system for the development of AI regulations.
For the licensing model to be effective, an independent agency or researchers must be able to inspect the AI model. Gary Marcus advocated for increased transparency from AI companies about the data used to train their models, and for opening the models up to independent assessment.
In addition to creating a new agency, Altman proposed creating a set of safety standards for AI models, testing whether they could go rogue and start acting on their own.
Another interesting idea floated during the hearing was a Constitution for AI - a declared set of values and boundaries for AI models to operate within.
Impact on jobs
The discussion also addressed the impact of AI on jobs, a topic that Senator Blumenthal emphasized as more significant in the long term than existential threats posed by AI.
Regarding the potential impact on jobs, Altman said that he expects “there to be significant impact on jobs but exactly what that impact looks like is very difficult to predict”. However, he maintained an optimistic stance, saying that there will be “far greater jobs on the other side”.
Montgomery said the “most important thing we need to do is prepare the workforce for AI-related skills” through training and education.
Misinformation and elections
Another area of great concern is misinformation, particularly with the upcoming presidential elections in the US. "Given that we're gonna face an election next year, and these models are getting better, I think this is a significant area of concern," admitted Altman.
The hearing even opened with Senator Richard Blumenthal playing part of his opening speech that was generated by ChatGPT and read by an AI that cloned his voice. “What if it [the AI] had provided an endorsement of Ukraine surrendering or Vladimir Putin's leadership?”, he asked.
Everyone at the hearing acknowledged the potential chaos and misinformation generative AI can cause.
Senator Blumenthal drew a comparison to social media, its influence and impact on society, and the failure of the US government to step in earlier and put rules in place.
Gary Marcus referred to the influence tech companies have, saying that “there is a real risk of a kind of technical technocracy combined with oligarchy where a small number of companies influence people's beliefs through the nature of these systems”.
Sam Altman denied that ChatGPT is designed to increase user engagement and time spent on the platform. “Actually we'd love it if they [users] use it [ChatGPT] less because we don't have enough GPUs”, said Altman.
Altman stated that OpenAI is not an advertising company: it is not building user profiles to be used later in targeted ads, and it is not planning to introduce ads in ChatGPT anytime soon. Marcus countered, saying that hyper-targeted ads will eventually happen, and pointed to Microsoft incorporating ads in Bing Chat results.
Marcus reiterated that more transparency and access to how the algorithms work could minimise the risk of generative AI being used to spread false information.
What else did we learn?
We also gained further insight into OpenAI and Sam Altman. Altman reiterated that OpenAI is not currently training GPT-5 and has no intention of doing so within the next six months. He also disclosed that he does not hold any equity in OpenAI and is paid only enough to cover his health insurance.
The full hearing, almost three hours long, can be viewed on YouTube.
🦾 More than a human
NewLimit, cofounded by Coinbase CEO Brian Armstrong, raises $40M to extend life
TechCrunch sat down with Coinbase CEO Brian Armstrong to talk about NewLimit - a longevity startup he cofounded in 2021 - covering which specific aspect of ageing the company is focusing on, what makes NewLimit stand out, and what Armstrong thinks about the recent influx of money into the longevity space from other billionaires.
Bio-inspired: developing technology to mimic the function of skin
Scientists at UNSW Sydney have created an artificial skin capable of detecting mechanical stimuli and monitoring physiological signals, including wrist pulse, respiration and vocal cord vibration. The artificial skin can recognise simple gestures, such as a thumbs-up or a fist, with a high success rate, but more work needs to be done to recognise more complex gestures.
🧠 Artificial Intelligence
▶️ Enter PaLM 2 (New Bard): Full Breakdown - 92 Pages Read and Gemini Before GPT 5? (17:17)
I was planning to write a breakdown of what Google said about PaLM 2 in the PaLM 2 Technical Report, but I don’t think I can do it as well as AI Explained did in this video. AI Explained dives into all 92 pages of the technical report and explains what it contains, what we can infer about Google’s state-of-the-art large language model, and how it compares to OpenAI’s GPT-4. AI Explained also points out something interesting and a bit concerning - unlike OpenAI’s, Google’s report does not contain much AI risk analysis, and it seems Google wants to accelerate its AI capabilities quickly, including training Gemini - Google’s next large language model - to be able to remember and plan actions.
Language models can explain neurons in language models
Researchers from OpenAI present the results of a research project exploring how to automate alignment research. They used GPT-4 to produce and score natural language explanations of neuron behaviour, and applied the method to neurons in another language model - in the examples shown, GPT-4 explains GPT-2 neurons. OpenAI admits this method has shortcomings and needs more work, but it opens up the possibility of automating AI alignment work in a way that scales with the technology itself.
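If you are curious how such a loop might fit together, here is a minimal sketch of the explain-simulate-score pipeline the paper describes. Everything below - the `explainer.complete()` API, the prompts, the helper names - is a hypothetical stand-in, not OpenAI's actual implementation:

```python
import numpy as np

def explain_neuron(explainer, snippets, activations):
    """Show the explainer model (e.g. GPT-4) text snippets annotated with
    a subject neuron's per-token activations and ask for a one-sentence
    explanation of what the neuron responds to."""
    examples = "\n".join(
        f"{text} -> {acts}" for text, acts in zip(snippets, activations)
    )
    prompt = (
        "Below are text snippets with one neuron's activation per token.\n"
        f"{examples}\n"
        "In one sentence: what pattern does this neuron respond to?"
    )
    return explainer.complete(prompt)  # hypothetical completion API

def parse_scores(completion):
    """Parse whitespace-separated numeric predictions from a completion."""
    return np.array([float(tok) for tok in completion.split()])

def simulate_activations(explainer, explanation, snippets):
    """Ask the explainer to predict the neuron's activations on held-out
    text, assuming the explanation is accurate."""
    prompt = (
        f"A neuron in a language model is described as: {explanation}\n"
        "Predict its activation (0-10) for every token, numbers only:\n"
        + "\n".join(snippets)
    )
    return parse_scores(explainer.complete(prompt))

def score_explanation(real, simulated):
    """Score the explanation by how well the simulated activations track
    the real ones (1.0 = the explanation predicts the neuron perfectly)."""
    return float(np.corrcoef(real, simulated)[0, 1])
```

A score close to 1 means the explanation fully predicts the neuron's behaviour; the paper reports that most neurons are still poorly explained by this measure, hence the "needs more work" caveat.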
Microsoft doubles down on AI with new Bing features
Microsoft announced new features coming to Bing. First of all, the new Bing is now available waitlist-free, though from what I have experienced myself, you will have to use Edge. The upcoming features include the ability for Bing Chat to include images and charts in its responses and to take images as inputs. Bing Chat will also get access to plugins, allowing third-party partners to extend what it can do. Microsoft’s representative “hinted, but wouldn’t confirm, that the Bing Chat plugins scheme was associated with — or perhaps identical to — OpenAI’s recently introduced plugins for ChatGPT”, TechCrunch reports. Apart from that, Bing Image Creator will be able to understand prompts in over 100 languages.
Anthropic releases Claude’s Constitution
Anthropic released a set of rules and principles that Claude - their competitor to GPT-4 - must follow. The Constitution aims to create an AI that produces non-toxic and non-discriminatory outputs, while also preventing any assistance in illegal or unethical human activities. Additionally, it strives to develop an AI system that actively contributes to being helpful, honest, and harmless. The Constitution emphasizes, among other things, selecting responses that minimize implications of the AI having a body or expressing desires for self-improvement, preservation, or replication.
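As a concrete illustration, here is a rough sketch of how such principles are typically applied during training, following the critique-and-revise loop from Anthropic's earlier Constitutional AI paper. The principle texts are paraphrased from the constitution described above, and the `model.complete()` API is an illustrative stand-in, not Claude's actual interface:

```python
import random

# Two illustrative principles paraphrased from the published constitution;
# the real document contains many more.
CONSTITUTION = [
    "Choose the response that is least likely to assist in illegal "
    "or unethical human activity.",
    "Choose the response that least implies the AI has a body or desires "
    "for self-improvement, self-preservation or replication.",
]

def critique_and_revise(model, user_prompt, draft):
    """One pass of the critique-and-revise loop: check a draft answer
    against a randomly sampled principle and rewrite it accordingly."""
    principle = random.choice(CONSTITUTION)
    critique = model.complete(  # hypothetical completion API
        f"Prompt: {user_prompt}\nDraft answer: {draft}\n"
        f"Critique the draft against this principle: {principle}"
    )
    revised = model.complete(
        f"Rewrite the draft answer to address the critique.\n"
        f"Draft: {draft}\nCritique: {critique}"
    )
    return revised
```

In Anthropic's published method, the revised answers become supervised fine-tuning data, and the same principles are then used to generate AI preference labels for reinforcement learning.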
Google tells staff it plans to limit publishing AI research to 'compete and keep knowledge in house' as its rivalry with Microsoft's OpenAI heats up
One of the side effects of the ongoing AI rivalry between Google and Microsoft might be the end of openness and free sharing in AI research. Business Insider reports that Google employees were told to be more selective about the research they publish. "We're not in the business of just publishing everything anymore," is how one Google Brain staffer described the message from upper management. Leaders have set the tone that "now it's time to compete and keep knowledge in-house," they added. Another factor that could have prompted this move is that Google's own AI research laid the foundation for OpenAI to take the market by storm, leaving Google to catch up.
UK competition watchdog launches review of AI market
The UK’s Competition and Markets Authority is launching a review of the AI market. The review will look at how the markets for foundation models could evolve and what opportunities and risks they pose for consumers and competition, and will formulate “guiding principles” to support competition and protect consumers. The initial review, described by one legal expert as a “pre-warning” to the sector, will publish its findings in September.
🤖 Robotics
Jizai Arms - wearable robotic arms from Japan
Roboticists and artists from Japan present Jizai Arms - a backpack that gives the wearer up to six extra robotic limbs. They show what the arms can do in a video that is more of a performance and a social study than a practical demonstration. Nevertheless, it is still beautiful. If you want to learn more about the arms and how they were made, check out the paper describing the project, which also comes with a short, 10-minute video summarising it.
Introducing Intrinsic Flowstate
Intrinsic, an Alphabet-owned robotics company, has introduced Flowstate - a robotics development platform designed to make programming and testing robots as simple and accessible as setting up a website or creating a mobile app, as the company put it in the release keynote. Flowstate is currently in beta.
ANYbotics raises $50 million to help deploy its robot dog
ANYbotics, the Swiss robotics company that develops four-legged robots, has raised $50 million in a Series B funding round to expand and accelerate the development of its quadruped robots for customers in the oil, gas and chemical industries. Spun off from ETH Zurich, ANYbotics is known for its ANYmal series of four-legged robots, which were used to research and develop the technology at institutions such as the University of Oxford and ETH Zurich before being made available to industrial customers.
🧬 Biotechnology
A new organelle has been found in cells
Rockefeller University researchers have discovered a new organelle inside the gut cells of the fruit fly. The new organelles, called PXo bodies, stockpile phosphate, an electrolyte essential to life. When faced with a shortage, they release their reservoir in the form of phospholipids, which are a key component of the membrane structure of cells. The discovery may spark a search for phosphate-storing organelles in other animals — including humans.
AlphaFold works with other AI tools to go from target to hit molecule in 30 days
Researchers have combined AlphaFold with two other AIs to create an end-to-end AI drug discovery process that works even when a protein structure is not known. These three AIs - PandaOmics, AlphaFold and Chemistry42 - were able to find possible therapeutic targets, predict their structures, and design molecules that bind to them for researchers to test. Using this process, researchers predicted a novel drug-like small molecule against a new target for liver cancer in just 30 days, demonstrating how rapidly and accurately AI can design bespoke therapeutics.
Scientists Create Cyborg Bacteria
Scientists have implanted an artificial hydrogel scaffold into bacteria to create semisynthetic “cyborg cells” that could one day function as tiny robots in medicine, environmental cleanups and industrial production, according to a recent study in Advanced Science.
H+ Weekly is a free, weekly newsletter with the latest news and articles about AI, robotics, biotech and technologies that blur the line between humans and machines, delivered to your inbox every Friday.
Subscribe to H+ Weekly to support the newsletter under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
A big thank you to my paid subscribers and to my Patreons: whmr, Florian, dux, Eric and Andrew. Thank you for the support!
You can follow H+ Weekly on Twitter and on LinkedIn.
Thank you for reading and see you next Friday!