Fake AI Biden and other ways AI is disrupting politics - Weekly News Roundup - Issue #451
Plus: gene therapy restores hearing in 11-year-old; AlphaFold found thousands of possible psychedelics; RIP Ingenuity helicopter; cat-shaped pizza delivery robot; and more!
Welcome to Weekly News Roundup Issue #451. This week’s main story covers recent instances of AI being used to spread political misinformation, like the fake Joe Biden robocall.
In other news, an 11-year-old has regained hearing thanks to gene therapy and AlphaFold has discovered thousands of potential new psychedelics. Meanwhile, DPD’s customer service chatbot experiences a significant malfunction, San Francisco files a lawsuit to reduce the number of robotaxis in the city, and more!
This week, people in New Hampshire received a phone call from US President Joe Biden himself, advising them not to vote in Tuesday’s presidential primary and to save their vote for the November general election. But that was not the real Joe Biden. It was a fake Biden, impersonated by an AI.
That fake Biden was not the only instance of AI being used to spread misinformation this month. Recently, OpenAI shut down an account that used a chatbot to impersonate Democratic presidential candidate Dean Phillips. In New York, a deepfaked recording of Manhattan Democratic boss Keith Wright saying misogynistic and disrespectful things went mini-viral. AI-generated fake content also played a big role in the recent elections in Bangladesh. What we have witnessed so far is only the beginning. I’m sure there are more examples of faked political content published just this month, and many more on the way as 2024 unfolds.
These incidents, as well as others like the fake images of an attack on the Pentagon and misleading content about the Israel-Hamas conflict, have shown the real-world impact of AI-generated disinformation. Experts are particularly concerned about audio fakes, which are harder to detect than visual fakes, especially over phone calls. They worry about the reach and impact of such fakes on susceptible and less tech-savvy demographics, like older people.
There are efforts underway to either detect or label AI-generated content. Images created by text-to-image generators can be watermarked with patterns embedded into the image that are invisible to the human eye but that a computer can quickly detect. One example is Google DeepMind’s SynthID, which will be used to label all images generated by Imagen 2 (including images generated by Gemini). Similar watermarks can also be applied to AI-generated audio clips and videos.
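To make the general idea concrete, here is a minimal toy sketch, assuming a naive least-significant-bit (LSB) scheme: it hides a short bit pattern in pixel values that the eye cannot notice but a program can check for. This is emphatically not how SynthID works (DeepMind’s watermark is learned and designed to survive cropping, compression, and resizing, and its details are not public); the fragile LSB trick below only illustrates the principle of an invisible, machine-detectable mark.

```python
# Toy illustration of an invisible, machine-detectable watermark.
# NOT SynthID: real watermarks are learned and robust; this naive LSB scheme
# breaks under JPEG compression, resizing, or cropping.
import numpy as np

WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical generator ID

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide `bits` in the least significant bit of the first len(bits) pixel values."""
    flat = image.reshape(-1).copy()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # each value changes by at most 1/255
    return flat.reshape(image.shape)

def detect_watermark(image: np.ndarray, bits: np.ndarray) -> bool:
    """Return True if the expected bit pattern is present."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: len(bits)] & 1, bits))

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in for a generated image
    marked = embed_watermark(img, WATERMARK_BITS)
    print("Max pixel change:", int(np.abs(marked.astype(int) - img.astype(int)).max()))  # at most 1
    print("Watermark detected:", detect_watermark(marked, WATERMARK_BITS))
```

The fragility of a scheme like this, where a single re-encode wipes the signal, is exactly why production watermarks are baked into the generation process and trained to survive everyday edits.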
Despite efforts to watermark and disclose AI involvement in content creation, enforcement and effectiveness remain challenging. The upcoming EU AI Act will require companies offering AI generators of any kind to label what their systems produce as AI-generated content and to design their systems so that AI-generated media can be detected. Similar regulations have been proposed in China. The US, meanwhile, has no equivalent regulations yet, only a commitment made by top US AI companies to the White House to develop tools to watermark and detect AI-generated content.
Detectors can help spot AI-generated content, but they won’t prevent it from being published. Technology can take us only so far. Beyond that, we are left to rely on our vigilance and our willingness to double-check with reputable and diverse sources of information whether what we see or hear is true. However, that approach can lead to losing trust in everything we see or hear online. Every article, every photo, every TikTok, every YouTube video is potentially fake. But that, unfortunately, is the reality we live in. We must address this challenge with a multi-faceted strategy: embracing technological solutions for detecting fake content, fostering legislative action to hold tech companies accountable, and cultivating media literacy among the public.
2024 is a year of elections: more than 40 countries are set to hold national elections, including Brazil, India, Indonesia, Mexico, Pakistan, Russia, the United States, the United Kingdom and Ukraine. In total, over 3 billion people will choose their next governments and leaders this year. AI-generated misinformation will play a significant role in all those elections. Vigilance, critical thinking, and a commitment to ethical AI practices must be at the forefront, as we live in a time when the lines between what is real and what is not are becoming increasingly blurred.
If you enjoy this post, please click the ❤️ button or share it.
I warmly welcome all new subscribers to the newsletter this week. I’m happy to have you here and I hope you’ll enjoy my work. A heartfelt thank you goes to everyone who joined as paid subscribers this week.
The best way to support the Humanity Redefined newsletter is by becoming a paid subscriber.
If you enjoy and find value in my writing, please hit the like button and share your thoughts in the comments. Additionally, please consider sharing this newsletter with others who might also find it valuable.
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous support towards the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
One last thing before we jump into this week’s news. I did a podcast with Tobias, where we discussed synthetic biology, mind-reading AI devices, and whether or not a merger between AI and human intelligence is "inevitable", as some claim. The podcast is available here. Tobias writes a newsletter covering tech and AI from a lawyer’s perspective. I recommend checking it out and subscribing if the legal side of tech interests you.

🦾 More than a human
Gene Therapy Restores Hearing in 11-Year-Old After Just One Month
Akouos has announced positive initial results from their gene therapy study for hearing loss in which an 11-year-old with profound congenital hearing loss regained hearing within 30 days post-treatment. This case marks the first use of gene therapy for genetic hearing loss in the US.
Can autoimmune diseases be cured? Scientists see hope at last
For over 50 years, researchers have been trying to find a way to tame the cells that are responsible for autoimmune disorders such as type 1 diabetes, lupus and multiple sclerosis. Recent results suggest that curative treatments may be within reach. Various approaches are being tested, including the use of antigens to reprogram rogue immune cells, selectively eliminating problematic cells, and introducing engineered suppressive immune cells. These emerging treatments, which differ from traditional methods that suppress the entire immune response, offer hope for more effective and targeted solutions.
🧠 Artificial Intelligence
AlphaFold found thousands of possible psychedelics
Using DeepMind’s AlphaFold, researchers were able to discover hundreds of thousands of possible new psychedelics. Although their focus was on finding new antidepressant drugs, I can imagine someone using AI-powered drug discovery software to create AI-generated recreational drugs in the future. Those cyberdrugs from cyberpunk stories may actually become a reality one day.
DPD AI chatbot swears, calls itself ‘useless’ and criticises delivery firm
DPD disabled a section of its AI chatbot after a frustrated customer made it swear and mock the company while trying to find a missing parcel. The conversation with the chatbot was then posted on X, where it gained the attention of thousands of people. DPD said it is working on a fix. This incident is another example of how poorly implemented AI in customer service can hurt both the customer and the company.
Trolls have flooded X with graphic Taylor Swift AI fakes
Sexually explicit AI-generated images of Taylor Swift have been circulating on X (previously known as Twitter), highlighting the growing issue of AI-generated fake pornography and the difficulty in controlling its spread. This incident underscores the significant challenge in stopping deepfake pornography involving both celebrities and non-celebrities, as such content can have devastating impacts on individuals' careers and lives.
The winner of a prestigious Japanese literary award has confirmed AI helped write her book
Japanese author Rie Kudan sparked controversy after revealing that about 5% of the award-winning book was written by ChatGPT. Despite some critics labelling her approach as “disrespectful” to authors who write without the assistance of AI, Kudan plans to continue using AI tools in her writing, saying they allow her creativity to express itself to the fullest. The prize committee also does not see Kudan’s use of AI as a problem.
Nightshade, an offensive tool to “poison” generative AI models
If you are an artist and you not only want to protect your work from being used to train AI models without your permission but also want to inflict damage on those models, then Nightshade might be for you. Nightshade modifies an image in a way that is invisible to the human eye but makes an AI model see something completely different (for example, a cow flying in space is seen by the AI as a handbag floating in space). The team behind Nightshade hopes their tool will give artists a way to fight back against model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives.
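As a rough illustration of the underlying principle (not Nightshade’s actual algorithm, which specifically poisons the training data of text-to-image models), the sketch below applies a single targeted gradient step to an image, keeping every pixel change within about 2/255, and checks how a pretrained ImageNet classifier’s prediction shifts. The random stand-in image and the target class are arbitrary choices for the demo; running it requires torchvision’s pretrained weights.

```python
# Conceptual sketch: a tiny, near-invisible perturbation can change what a model "sees".
# This is NOT Nightshade's method; it is a single targeted FGSM-style step against a
# pretrained classifier, shown only to illustrate the general principle.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224)   # stand-in for a real photo, values in [0, 1]
target_class = torch.tensor([654])   # arbitrary ImageNet class chosen for the demo

image.requires_grad_(True)
loss = F.cross_entropy(model(image), target_class)  # loss toward the attacker's target label
loss.backward()

epsilon = 2.0 / 255.0                # perturbation budget: ~2 intensity levels per pixel
poisoned = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("Max pixel change:", (poisoned - image.detach()).abs().max().item())
    print("Top-1 class before:", model(image.detach()).argmax(dim=1).item())
    print("Top-1 class after: ", model(poisoned).argmax(dim=1).item())
```

A single step like this often isn’t enough to flip a prediction; tools in this space iterate the optimization while keeping the perturbation below a visibility threshold, which is what makes the change effective yet imperceptible.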
▶️ MSI has an AI monitor that’s practically cheating (6:43)
This year’s CES has seen a new crop of AI-equipped devices of various shapes and sizes, but I don’t think anyone expected a gaming monitor that lets you cheat in League of Legends. Yet that’s exactly what MSI brought to the show. MSI claims its AI technology isn’t technically cheating, but players beg to differ.
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
🤖 Robotics
San Francisco files lawsuit to pump brakes on robotaxis
San Francisco is suing the California Public Utilities Commission (CPUC) to reduce the number of robotaxis operating in the city. Citing hundreds of safety incidents involving driverless vehicles, including interference with first responders, the city is challenging the CPUC’s decision, arguing that it may not comply with the law. The lawsuit also seeks stricter safety regulations and reporting requirements for autonomous vehicles.
After Three Years on Mars, NASA’s Ingenuity Helicopter Mission Ends
“The historic journey of Ingenuity, the first aircraft on another planet, has come to an end,” announced NASA Administrator Bill Nelson. After completing 72 flights over nearly three years on the Martian surface, the robotic helicopter sustained damage to its rotor blades during a landing and is no longer capable of flight.
That Awesome Robot Demo Could Have a Human in the Loop
This article from IEEE Spectrum highlights a misconception created by recent viral videos of robots like Stanford’s Mobile ALOHA and Tesla’s Optimus, in which the robots appeared to be doing impressive things autonomously but were actually controlled by humans through teleoperation, misleading viewers about the robots’ autonomy. The article emphasizes the importance of providing clear context in robot videos so that viewers better understand the robots’ true capabilities and how the demos were achieved.
Cat-shaped pizza delivery robot
Welcome to a future where a cat-shaped robot delivers your pizza, all the while singing a song about a pizzeria run by cats.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!