Is AI going to get a reality check? - H+ Weekly - Issue #436
This week - OpenAI contemplates making its own AI chips; State of AI Report 2023; AGI is already here; Disney presents a cute robot; the rise of "godbots"; and more!
Generative AI and large language models took the mainstream by storm. Within two months of its launch in November 2022, ChatGPT reached 100 million users, becoming the fastest-growing app at the time. VC firms poured billions into companies working on generative AI. Every company, from massive corporations to small startups, was asking itself not if, but how to use AI in their business.
Almost a year later, the situation looks different. Adoption of ChatGPT and Bard is not as high as previously thought, and traffic to ChatGPT's website declined over the summer. Gartner placed generative AI on the "Peak of Inflated Expectations," expecting it to be on a trajectory towards disappointment. It seems the excitement surrounding generative AI and large language model chatbots is waning.
On Tuesday, the analyst firm CCS Insight published a report predicting that the AI industry would experience a “cold shower” next year as reality sets in. The truth is that training and running these massive models is expensive on several fronts. Firstly, acquiring the hardware to run these models is pricey. The latest Nvidia chips are hard to come by, and their prices are inflated. Secondly, advanced AI models require a vast amount of computing power, which translates to high electricity bills.
A report from The Wall Street Journal highlighted another issue AI companies face: their business models aren't profitable. The report cites GitHub Copilot as a primary example. This service, used by over 1.5 million programmers to expedite code writing, costs $10 per month. However, according to the report, GitHub loses $20 per user monthly, with some users costing the company up to $80 per month (a claim Nat Friedman, who ran GitHub, denies). Another report from The Washington Post, published in June, suggested that chatbots like ChatGPT lose money every time people use them.
The rising costs of operating generative AI services limit companies' choices. One option is to raise the price of AI-enhanced services. That’s the path Microsoft is taking with Office 365 subscriptions for business, which will cost customers an additional $30 a month for AI tools. Google is planning a similar approach for their Workspace services. Another avenue is to invest in developing more compact models, a path Zoom has chosen. If those options don’t work, there is always the option to use less powerful and cheaper AI models.
While these are the technical and business challenges AI companies face in the immediate future, another looming issue is AI regulation. It is unlikely that the US will introduce AI regulations anytime soon, but the EU is on track with its AI Act. Once enacted, specific AI applications, like facial recognition, will be prohibited in the EU. The new rules will require generative AI model developers to undergo independent reviews before releasing their models to the general public.
Even the methods we use to evaluate large language models are being questioned. Early in the year, numerous articles claimed that GPT-4 outperformed most humans in entry exams or coding challenges. But there is little agreement on what those results really mean. A recent article published in MIT Technology Review states that a growing number of researchers want to overhaul the way we test AI and ditch the practice of scoring machines on human tests. Researchers point out that these tests are brittle and may be misleading, emphasizing memorization over reasoning. Yet, the media often amplifies these possibly flawed test outcomes, intensifying the hype.
Generative AI is still a relatively new technology. And like every new, exciting technology before it, it is being overhyped, both in positive and negative ways. AI researchers and engineers will find ways to run AI models more efficiently, and the industry will find a sustainable business model. Perhaps a cold reality check is precisely what the AI community needs at the moment.
If you enjoy this post, please ❤️ or share it.
Next week marks another trip around the sun for me 🎂. To celebrate the occasion, I'm offering a limited-time 20% discount on the first year of a paid subscription.
If you enjoy and find value in my writing, please hit the like button and share your thoughts in the comments. Share the newsletter with someone who will enjoy it, too.
You can also buy me a coffee if you enjoy my work.
🦾 More than a human
Bioprinted skin heals severe wounds in pigs, humans are next
Scientists from the Wake Forest Institute for Regenerative Medicine have created a bioprinted skin that functions like natural skin. It can be used to treat wounds, burns, and various other types of skin injuries. Tests on pigs resulted in improved wound closure and skin regeneration. “This work represents an advancement in the bioengineering of skin substitutes to enhance the regeneration and production of native-like skin and suggests that bioprinted skin may be applicable for human clinical use,” researchers say.
This robotic exoskeleton can help runners sprint faster
South Korean researchers have built an exoskeleton that makes people run faster. “Although this is a preliminary study, we can say the exosuit can augment the human ability to run,” says Giuk Lee, an associate professor at Chung-Ang University in Seoul, who led the research. During the tests, researchers found that the exoskeleton allowed people to run a distance of 200 meters 0.97 seconds faster.
Rice-engineered material can reconnect severed nerves
Researchers from Rice University have found a way to successfully use magnetoelectric material (material that can turn magnetic fields into electric fields) to stimulate neural tissue in a minimally invasive way and help treat neurological disorders or nerve damage. Previous attempts were failing because neurons had a hard time responding to the shape and frequency of the electric signal resulting from this conversion. The new material solves this issue and performs magnetic-to-electric conversion 120 times faster than similar materials.
🧠 Artificial Intelligence
ChatGPT-owner OpenAI is exploring making its own AI chips
OpenAI is looking at making its own AI chips and has gone as far as evaluating a potential acquisition target, according to people familiar with the company’s plans, Reuters reports. OpenAI hasn’t pulled the trigger yet and is exploring other options, such as a closer partnership with Nvidia (which has an 80% market share for AI chips) or diversifying its AI chip suppliers beyond Nvidia.
State of AI Report 2023
The annual State of AI Report is out. Unsurprisingly, the theme of this year was generative AI and large language models. The report also highlights the rise of Nvidia by calling compute the new oil, the influx of investments into generative AI startups (the report states $18 billion was invested into them), and AI safety and regulation being one of the main conversations this year. The full report (all 163 slides) can be viewed here.
What’s changed since the “pause AI” letter six months ago?
It’s been six months since the letter calling to stop giant AI experiments was published. As we can see now, the letter did nothing to stop AI research. MIT Tech Review sat down with Max Tegmark, one of the authors of the letter, to discuss it. Tegmark says the letter succeeded in making AI risks a public conversation. He also acknowledged that the letter did not stop AI research, nor did it speed up AI regulations in the US. When asked about mistakes to avoid now, Tegmark replied: “1. Letting the tech companies write the legislation. 2. Turning this into a geopolitical contest of the West versus China. 3. Focusing only on existential threats or only on current events. We have to realize they’re all part of the same threat of human disempowerment. We all have to unite against these threats.“
AI Jesus? Experts question wisdom of godbots
Shortly after ChatGPT was released, a number of chatbots specifically designed to give advice on moral and ethical questions appeared. Some of them were religious, emulating Krishna, Jesus, and other deities. In this interview, a professor of anthropology at the University of Michigan dissects this phenomenon: why humans seek answers to ethical questions from AI, and whether there is any danger in these kinds of chatbots.
Artificial General Intelligence Is Already Here
Blaise Agüera y Arcas, vice president and fellow at Google Research, and Peter Norvig, an influential computer scientist, proclaim in this essay that AGI has already arrived. They argue that models such as ChatGPT, Bard, LLaMA, and Claude have achieved the most important aspects of AGI. Because of that, decades from now, we will see them as the first examples of AGI, similar to how we see ENIAC as the first general-purpose electronic computer. The pair also addresses four main reasons why people are reluctant to admit that AGI is here: the metrics we use to measure AI, ideological commitment to alternative AI theories, devotion to human (or biological) exceptionalism, and the economic implications of AGI.
Welcome to the AI gym staffed by virtual trainers
Lumin Fitness is a gym where instead of human coaches there are AI coaches. Gym-goers can choose from an app what kind of training they want to do and the personality of their coach. The virtual coach then tracks, with the help of many sensors inside the gym, how the exercises are performed and guides customers through the training. Some experts think the idea of virtual coaches can catch on and be beneficial to some people. The recent rise of AI-powered therapy and companion bots shows some people feel more comfortable interacting with machines than they might with fellow humans.
🤖 Robotics
How Disney Packed Big Emotion Into a Little Robot
While other roboticists are focusing on getting their bipedal robots to walk reliably, engineers at Disney are focusing on something else: giving their robots character and conveying emotions. That’s what their newest robot does. It is a small, adorable, and very expressive robot. It is mostly 3D printed, built using modular hardware and actuators that made it quick to design and iterate on, going from an idea to a working robot in less than a year. A big part of the project was the reinforcement learning pipeline to quickly turn the animator’s vision into expressive movement while also teaching the robot how to deal with all kinds of terrain, from stages to Swiss forests. If you are lucky, you might spot these robots in Disneyland.
Scaling up learning across many different robot types
Researchers from Google DeepMind released a new set of resources for general-purpose robotics learning across different robot types, or embodiments. The first is the Open X-Embodiment dataset for training robots. Researchers say it is the most comprehensive robotics dataset of its kind, demonstrating more than 500 skills and 150,000 tasks across more than 1 million episodes. They hope the Open X-Embodiment dataset will do for robotics what ImageNet did for computer vision. The second resource is RT-X, a transformer-based model designed to control robots. The RT-X model was tested on 22 different robots from 33 academic labs and successfully controlled all 22 of them, showing that skills transfer across many robots.
This Robot Could Be the Key to Helping People With Disabilities
Stretch from Hello Robot is a clever and simple robot designed to be a helper for people with disabilities. It features a single extendable arm that moves up and down on a pole, mounted on a mobile platform. Stretch is capable of performing basic tasks autonomously, like grasping objects and moving from room to room. But the $20,000 price tag puts it out of reach for many people. “We’re going to keep iterating to make Stretch more affordable,” says Hello Robot’s Charlie Kemp. “We want to make robots for the home that can be used by everyone, and we know that affordability is a requirement for most homes.”
🧬 Biotechnology
New CRISPR system is 66% smaller but just as powerful
A team from the University of Tokyo has found a new CRISPR-Cas enzyme that is much smaller than the original CRISPR-Cas9 system, which could make it a lot easier to treat diseases inside the human body. Unlike some other attempts to shrink down CRISPR, this version works about as well as its bigger predecessor, potentially unlocking big advances in gene therapy.
These CRISPR-Engineered Super Chickens Are Resistant to Bird Flu
Researchers from the UK have created genetically engineered chickens resistant to bird flu. “This showed us a proof of concept that we can move towards making chickens resistant to the virus,” study author Dr Wendy Barclay at Imperial College London said in a press conference. “But we’re not there yet.” Despite the genetic boost, half of the edited birds got sick when challenged with a large dose of the virus. There are also concerns that the edits could cause the virus to evolve faster, gaining mutations that make it a better spreader and perhaps let it jump to humans.
Thanks for reading. If you enjoy this post, please ❤️ or share it.
H+ Weekly sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!