A peek into Apple Intelligence - Weekly News Roundup - Issue #478
Plus: EU AI Act is in force now; a titanium heart pumps blood inside a living human; an AI necklace to combat loneliness; autonomous cars drifting in tandem; and more!
Hello and welcome to Weekly News Roundup Issue #478!
This week, we’ll take a peek into Apple Intelligence as Apple has released a beta version of iOS 18 containing some Apple Intelligence features. Additionally, Apple published a paper giving us more information about Apple Foundation Models.
In other news, the EU AI Act is now in force, and another company is trying its luck with an AI-powered wearable. Meanwhile, OpenAI has released Advanced Voice Mode for some users and promises to give the U.S. AI Safety Institute early access to its next model.
In robotics, another company showcases its humanoid robots, and researchers have made two autonomous cars drift in tandem.
Additionally, a titanium heart is now pumping blood inside a living human, and Synchron’s brain implant lets people control Apple’s Vision Pro with their minds.
Enjoy!
A peek into Apple Intelligence
The release of Apple Intelligence is still weeks, if not months, away, but the recent beta release of the new iOS and a paper describing Apple Foundation Models give us a little peek into what Apple is bringing to the table. With the iOS 18.1 developer beta, Apple has let some Apple Intelligence features out into the wild. Users who have signed up for the iOS 18 developer beta can now download the new system onto their phones and get a first taste of Apple Intelligence.
According to reports from The Verge and TechCrunch, the update brings several changes to Siri, including a new glowing edge to indicate that Siri is listening and the ability to interact with Siri via text. It also introduces writing tools that let users proofread text or rewrite it to make it more friendly, professional, or concise. These tools can also turn text into lists, summarise it, format it into a table, or highlight key points. Meanwhile, the Mail app can now condense emails into one-line summaries and offers smart replies.
There is also a new Reduce Interruptions mode, which uses Apple Intelligence to surface only the most important notifications and mute everything else.
Alongside the iOS 18.1 dev release, Apple also published a paper in which the researchers and engineers from Cupertino revealed some details about Apple Foundation Models (AFM).
Before I highlight some interesting points from that paper, I want to note how uncharacteristic it is for Apple. The company is normally very secretive about its products, but in this instance it is surprisingly open and transparent about how the AFM models were built. I see two reasons why this might be the case.
First, if we zoom out and take into account the entire field of AI, Apple isn't doing anything groundbreaking. The approach Apple has taken is more or less similar to what others in the industry have done. Second, Apple might not gain much from secrecy here. Apple Intelligence will not be the main product Apple sells. It is a feature, an incentive, for people to buy new iPhones, iPads, or Macs. By being open about its foundation models, Apple can better explain how they are built, how they work, how they perform compared to other models, and how Apple is ensuring their safety and privacy.
Going back to the paper, it gives us some more details about both AFM models—AFM-on-device and AFM-server. AFM-on-device is a 3-billion-parameter model, the smallest in the AFM family, designed to run entirely on a device, as its name suggests. AFM-server is a larger model that Apple Intelligence will call when a request is too complex for the smaller on-device model to handle. The paper did not disclose the size of the AFM-server model.
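The two-tier split described above can be pictured as a simple routing decision: try the small on-device model first, and escalate to the server model when a request looks too demanding. The sketch below is purely illustrative; the names, the complexity heuristic, and the threshold are my assumptions, since Apple has not published any such API.

```python
# Illustrative sketch of a two-tier routing decision like the one Apple
# Intelligence is described as making. All names and thresholds here are
# hypothetical, not Apple's actual implementation.

from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    needs_long_context: bool = False

COMPLEXITY_THRESHOLD = 500  # illustrative cutoff, in characters

def route(request: Request) -> str:
    """Return which model tier would handle the request."""
    if request.needs_long_context or len(request.prompt) > COMPLEXITY_THRESHOLD:
        return "AFM-server"      # larger model, handled off-device
    return "AFM-on-device"       # ~3B-parameter model, runs locally

print(route(Request("Summarise this email in one line.")))  # AFM-on-device
print(route(Request("x" * 1000)))                           # AFM-server
```

In a real system the escalation signal would come from the task type and context length rather than a character count, but the shape of the decision is the same.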
The paper briefly describes the architecture of the models before diving into an explanation of the dataset used to train the AFM models. According to Apple, the dataset consists of a diverse and high-quality data mixture, including data licensed from publishers, curated publicly available or open-source datasets, and publicly available information crawled by Applebot, Apple’s web crawler. Apple made it clear that the crawler did not scrape websites that opted out of being crawled.
As for the specialised datasets, the paper explained that the code used to train AFM models was obtained from open-source repositories on GitHub, where the licenses permitted such use.
Additionally, Apple has removed any personally identifiable information, profanity, and unsafe material from the dataset. However, the exact dataset was not made publicly available.
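To give a flavour of what filtering PII out of a training mixture involves, here is a toy sketch: drop documents matching simple PII patterns. Production pipelines use far more sophisticated detectors than these two regexes, which are illustrative only.

```python
# Toy sketch of PII filtering for a training dataset. Real pipelines use
# dedicated detectors; the two patterns below are illustrative only.

import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN-like numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
]

def is_clean(document: str) -> bool:
    """Keep only documents with no PII pattern match."""
    return not any(p.search(document) for p in PII_PATTERNS)

docs = ["Swift is a programming language.", "Contact me at jane@example.com"]
print([d for d in docs if is_clean(d)])  # ['Swift is a programming language.']
```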
Another interesting detail buried in the paper is the mention of using Google’s TPUs in the early phases of the development of AFM models. Apple was using Google’s custom v4 and v5p Cloud TPUs (Tensor Processing Units) in the pre-training and training phases. This detail explains the rumours floating around at the beginning of the year, suggesting that what was then called “Apple GPT” was running on Google Cloud. The decision to use Google’s hardware also suggests that some Big Tech companies are open to exploring and finding alternatives to Nvidia’s GPUs for AI training.
The next section worth a look is the benchmarks. Some of these were shared back in June, and the paper adds even more. The picture Apple paints with these benchmark results is that both AFM-on-device and AFM-server are capable models, comparable with similar models in their respective classes. AFM-on-device holds its own against models like Llama 3 8B, Phi-3-mini, or Mistral-7B, while AFM-server competes with models such as GPT-4 and Gemini 1.5 Pro.
The last section of the paper covers Apple’s approach to Responsible AI, where the company outlines what it is doing to ensure the safety of its AFM models. In addition to the previously mentioned removal of personal data, profanity, and other unsafe content, Apple highlights additional methods it has employed to enhance the safety of its models. As a result, according to the paper, AFM models are significantly less likely to generate harmful or misleading content compared to their competitors. Additionally, AFM models are also preferred in benchmarks involving human evaluation.
The overall picture that emerges from that paper is that the AFM models are looking good. However, I advise treating any paper describing a new AI model, especially one coming from a Big Tech company, as a marketing campaign dressed as academic work. Even though the paper revealed some new information about the models powering Apple Intelligence, it wasn't peer-reviewed. We will have to wait until Apple Intelligence and the AFM models are released to confirm whether Apple's claims hold up.
And we may have to wait a bit longer to experience Apple Intelligence. According to recent reports, Apple needs more time to iron out bugs. The plan is apparently to first release the new versions of iOS, iPadOS, and macOS without Apple Intelligence features, which will arrive as a separate update a couple of weeks later, sometime in October. Even that update won't include everything, and the full release of Apple Intelligence is not expected until the first half of 2025.
If you enjoy this post, please click the ❤️ button or share it.
Do you like my work? Consider becoming a paying subscriber to support it.
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous support towards the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🦾 More than a human
Seventh patient ‘cured’ of HIV: why scientists are excited
A 60-year-old man in Germany has become at least the seventh person with HIV to be announced free of the virus after receiving a stem-cell transplant. The man, who has been virus-free for close to six years, is only the second person to receive stem cells that are not resistant to the virus. Known as the "next Berlin patient," he received stem cells with only one copy of a mutated CCR5 gene, which HIV typically uses to enter cells. This case challenges the notion that curing HIV is solely about targeting CCR5 and broadens the potential donor pool for such transplants. This finding could have significant implications for future HIV treatments, including gene-editing therapies like CRISPR.
Maglev titanium heart now whirs inside the chest of a live patient
For the first time, a fully mechanical heart has been implanted inside a human being. The roughly fist-sized device uses a magnetically levitated rotor that pumps blood and replaces both ventricles of a failing heart. This artificial heart is able to push blood at a rate of 12 litres per minute, which is enough to allow an adult male to engage in exercise. However, the heart is not designed to be permanently implanted in a human—its role is to keep the patient alive while they wait for a heart transplant.
Neuralink rival Synchron’s brain implant now lets people control Apple’s Vision Pro with their minds
Neurotech startup Synchron announced it has connected its brain implant to Apple’s Vision Pro headset, allowing patients with limited mobility to control the device using only their thoughts. Synchron CEO Thomas Oxley said he believes Apple’s iOS accessibility platform is best in class, which is why the company has initially focused on helping patients control devices within Apple’s ecosystem. He added that Synchron will likely work to connect its BCI to other headsets, but it’s starting with the Vision Pro.
🧠 Artificial Intelligence
The EU’s AI Act is now in force
As of yesterday, August 1st, 2024, the EU AI Act is in force. This starts the clock on a series of staggered compliance deadlines that will apply to different types of AI developers and applications. Most provisions will not be fully applicable until mid-2026. The first deadline, which enforces bans on a small number of prohibited uses of AI in specific contexts, such as law enforcement's use of remote biometrics in public places, will apply in just six months' time.
Friend’s $99 necklace uses AI to help combat loneliness
After the disasters that were the Humane AI Pin and the Rabbit R1, another startup is trying its luck with wearable AI devices. Friend is a necklace that doubles as an AI companion, designed to help people combat loneliness. The device constantly listens to its wearer and can proactively send messages, such as wishing them good luck. It doesn't offer any productivity features and doesn't act as an AI assistant. It's simply an AI friend you can talk to, nothing more. The company said it will start taking preorders for the basic white version, priced at $99 and expected to ship in January 2025.
Character.AI Co-Founders Hired by Google in Licensing Deal
Another AI company has fallen victim to Big Tech's new playbook of "acquiring" startups. After Microsoft and Amazon effectively absorbed Inflection AI and Adept, respectively, it is now Google's turn to absorb Character.AI, a startup known for chatbots that can mimic anyone or anything. As with those deals, the founders of Character.AI, along with some employees, will join Google. Character.AI, meanwhile, will enter into a non-exclusive licensing deal with Google for its large language model technology and will continue to exist, albeit in a different form.
Instagram Will Let You Make Custom AI Chatbots—Even Ones Based on Yourself
Meta is launching AI Studio, a tool that allows users to create virtual characters with custom personalities, traits, and interests, even based on their own personalities. For example, influencers can use these virtual characters to set up bots that engage with their followers in DMs. Initially available to Instagram Business users, AI Studio will soon roll out to all Meta users in the US. Later, the tool will be expanded to WhatsApp and Facebook.
OpenAI releases ChatGPT’s hyper-realistic voice to some paying users
OpenAI has started to roll out ChatGPT’s Advanced Voice Mode (the same feature that used a voice similar to Scarlett Johansson’s in its demos) to some ChatGPT Plus users, with plans to gradually enable the feature for all Plus users later this year.
OpenAI pledges to give U.S. AI Safety Institute early access to its next model
OpenAI CEO Sam Altman announced a partnership with the U.S. AI Safety Institute to provide early access to its next AI model for safety testing. This move aims to address concerns that OpenAI has deprioritized AI safety, particularly after disbanding a team focused on preventing "superintelligent" AI risks. Despite pledges to dedicate 20% of resources to safety research and policy changes, scepticism remains. The timing of this partnership, alongside OpenAI's support for a Senate bill establishing the Safety Institute, has raised concerns about potential influence over AI regulations.
Introducing GitHub Models: A new generation of AI engineers building on GitHub
GitHub is launching GitHub Models, "a playground for building with AI." This new tool will offer AI developers convenient access to a variety of AI models, such as Llama 3.1, GPT-4o, GPT-4o mini, Phi 3, and Mistral Large 2, allowing them to test different prompts and model parameters. From there, GitHub Models helps integrate the models and their parameters into the app and deploy it.
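The models on offer sit behind chat-completion-style APIs, so experimenting with prompts and parameters comes down to assembling a request like the one below. This is a hedged sketch: the payload follows the widely used OpenAI-compatible format, but the exact endpoint, authentication, and parameter names GitHub Models exposes should be checked against its documentation.

```python
# Sketch of the kind of chat-completion payload a developer would assemble
# when testing prompts and parameters against a hosted model. The format is
# the common OpenAI-compatible one; endpoint and auth details are omitted
# because they are assumptions that may differ in GitHub Models itself.

import json

def build_chat_request(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Assemble an OpenAI-compatible chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # one of the parameters tweakable in the playground
    }

payload = build_chat_request("gpt-4o-mini", "Explain tandem drifting in one sentence.")
print(json.dumps(payload, indent=2))
```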
Nvidia’s new Titan GPU will beat the RTX 5090, according to leak
According to leaks, Nvidia could bring back its Titan cards with the release of the new Blackwell-based RTX line of GPUs. If this turns out to be true, the new Titan card will be aimed at AI developers, offering more power than the upcoming top-of-the-line RTX 5090 gaming/semi-professional card and leaving the RTX 4090 in the dust.
Microsoft says OpenAI is now a competitor in AI and search
In its annual report to the SEC, Microsoft added OpenAI to the list of its competitors in AI and search, complicating the relationship between the two companies. Microsoft has reportedly invested $13 billion into OpenAI and is its biggest partner, offering cloud services for OpenAI and integrating OpenAI models into its products. However, OpenAI remains an independent company, and with the announcement of SearchGPT, it introduces a new competitor to search engines, including Microsoft’s Bing.
Ferrari exec foils deepfake attempt by asking the scammer a question only CEO Benedetto Vigna could answer
Scams in which attackers use very convincing deepfakes to impersonate high-profile executives are becoming more frequent. Earlier this year, a company from Hong Kong was the target of such an attack and lost $26 million. Recently, scammers targeted Ferrari, but the scam was foiled when the executive asked the impersonator a question that only the real Ferrari CEO would know the answer to.
Deepfake Porn Is Leading to a New Protection Industry
The rise of deepfake technology has made it alarmingly easy to create fake pornographic content. The issue is prompting action from women-led startups like That’sMyFace and Alecto AI, which aim to use AI to combat AI by helping individuals detect and remove deepfake content. However, this is an uphill battle, as current deepfake detectors struggle to keep up with the rapid development of generation tools. Legal frameworks are also inconsistent, with varying levels of protection across different jurisdictions, prompting calls for stronger and more unified regulations.
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
🤖 Robotics
A Robot Dentist Might Be a Good Idea, Actually
By combining breakthroughs in advanced imaging, AI, and robotics, Perceptive has built a robot dentist. The company promises its robot will make dental procedures quicker and more accessible, especially in places lacking dentistry services, such as rural areas. Although Perceptive has successfully tested its first-generation system on humans, it’s not yet ready for commercialization. The robot needs to get a green light from the FDA, and if that goes well, the company estimates that it could be available to the public in “several years.”
▶️ TRI / Stanford Engineering Autonomous Tandem Drift (3:34)
This video shows two cars drifting together in tandem on a racetrack somewhere in California. However, what is unusual about these two cars is that they are both autonomous. A team from Stanford Engineering and the Toyota Research Institute (TRI) modified two Toyota GR Supras to perform tandem drifting, where the lead vehicle focuses on tracking an ideal path while the chase vehicle has to stay close, all while drifting and avoiding any collisions. Researchers hope that the lessons learned from this project will be useful in making autonomous road vehicles safer. For more information about this project, check out this article from TRI.
Neura shows off humanoid robot 4NE-1
Another company has joined the humanoid robots scene. The German robotics manufacturer Neura released a video showcasing what its humanoid robot, named 4NE-1, can do. However, this is more of a promotional video announcing a partnership with Nvidia than raw footage of what the robot can actually do in real time.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"