Apple boosts plans to bring generative AI to iPhones

24 January 2024 | Technology & Digitalization

Apple is quietly increasing its capabilities in artificial intelligence, making a series of acquisitions, staff hires and hardware updates that are designed to bring AI to its next generation of iPhones.

Industry data and academic papers, as well as insights from tech sector insiders, suggest the Californian company has focused most of its attention on the technological challenge of running AI on mobile devices.

The iPhone maker has been more active than rival Big Tech companies in buying AI start-ups, acquiring 21 since the beginning of 2017, research from PitchBook shows. The most recent of those acquisitions was its purchase in early 2023 of California-based start-up WaveOne, which offers AI-powered video compression.

“They are getting ready to do some significant M&A,” said Daniel Ives at Wedbush Securities. “I’d be shocked if they don’t do a sizeable AI deal this year, because there’s an AI arms race going on, and Apple is not going to be on the outside looking in.”

According to a recent research note from Morgan Stanley, almost half of Apple’s AI job postings now include the term “Deep Learning”, which relates to the algorithms powering generative AI — models that can spew out humanlike text, audio and code in seconds. The company hired Google’s top AI executive, John Giannandrea, in 2018.

Apple has been typically secretive about its AI plans even as Big Tech rivals, such as Microsoft, Google and Amazon, tout multibillion-dollar investments in the cutting-edge technology. But according to industry insiders, the company is working on its own large language models — the technology that powers generative AI products, such as OpenAI’s ChatGPT.

Chief executive Tim Cook told analysts last summer that the company “has been doing research across a wide range of AI technologies” and is investing and innovating “responsibly” when it comes to the new technology.

Apple’s goal appears to be operating generative AI through mobile devices, which would allow AI chatbots and apps to run on the phone’s own hardware and software rather than be powered by cloud services in data centres.

That technological challenge requires reductions in the size of the large language models that power AI, as well as higher-performance processors.
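
To make that concrete, one common way to shrink a model for constrained hardware is post-training quantization, storing weights as 8-bit integers instead of 32-bit floats. The sketch below is a minimal illustration of that general technique using PyTorch, with a toy feed-forward block standing in for a real language model; the layer sizes are invented and this is not a description of Apple's actual approach.

```python
# Minimal sketch of post-training dynamic quantization with PyTorch.
# A toy feed-forward block stands in for a real language model; the point
# is only to show the size reduction from fp32 weights to int8 weights.
import os
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4096, 11008),
    nn.GELU(),
    nn.Linear(11008, 4096),
)

# Replace Linear weights with int8 versions; activations are quantized
# on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialized size of a module's parameters, in megabytes."""
    torch.save(m.state_dict(), "_tmp.pt")
    size = os.path.getsize("_tmp.pt") / 1e6
    os.remove("_tmp.pt")
    return size

print(f"fp32 weights: {size_mb(model):.0f} MB")
print(f"int8 weights: {size_mb(quantized):.0f} MB")  # roughly 4x smaller
```

Quantization is only one lever; pruning, distillation into smaller models and dedicated neural accelerators all attack the same constraint from different angles.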

Other device makers have moved faster than Apple, with both Samsung and Google releasing new devices that claim to run generative AI features through the phone.

Apple’s Worldwide Developers Conference, usually held in June, is widely expected to be the event where the company reveals its latest operating system, iOS 18. Morgan Stanley analysts expect the mobile software will be geared towards enabling generative AI and could include its voice assistant Siri being powered by an LLM.

“They tend to hang back and wait until there is a confluence of technology, and they can offer one of the finest representations of that technology,” said Igor Jablokov, chief executive of AI enterprise group Pryon and founder of Yap, a voice recognition company that was acquired by Amazon in 2011 to feed into its Alexa and Echo products. 

Apple has also unveiled new chips with greater capability to run generative AI. The company said its M3 Max processor for the MacBook, revealed in October, “unlocks workflows previously not possible on a laptop”, such as AI developers working with billions of data parameters.

The S9 chip for new versions of the Apple Watch, unveiled in September, allows Siri to access and log data without connecting to the internet. And the A17 Pro chip in the iPhone 15, also announced at the same time, has a neural engine that the company says is twice as fast as previous generations.

“As far as the chips in their devices, they are definitely being more and more geared towards AI going forward from a design and architecture standpoint,” said Dylan Patel, an analyst at semiconductor consulting firm SemiAnalysis.

Apple researchers published a paper in December announcing that they had made a breakthrough in running LLMs on-device by using flash memory, meaning queries can be processed faster, even offline.
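
The broad idea in that line of work is to keep most of a model's parameters in storage and read in only the slice a given query needs, rather than holding everything in RAM. The toy sketch below illustrates that pattern with a NumPy memory-mapped array; the matrix dimensions, file name and “active” rows are invented for illustration, and this is not the technique described in Apple's paper.

```python
# Toy illustration of on-demand weight loading: keep a large weight matrix
# on disk and read only the rows the current computation touches, instead
# of loading everything into RAM up front.
import numpy as np

ROWS, COLS = 20_000, 1_024  # hypothetical layer dimensions

# One-time setup: write a large weight matrix straight to disk.
weights_on_disk = np.lib.format.open_memmap(
    "weights.npy", mode="w+", dtype=np.float16, shape=(ROWS, COLS)
)
weights_on_disk[:] = 0.01
weights_on_disk.flush()

# At inference time, memory-map the file rather than loading it fully...
weights = np.load("weights.npy", mmap_mode="r")

# ...and pull in only the handful of rows this query actually needs.
active_rows = [12, 4_500, 17_219]        # invented indices
x = np.ones(COLS, dtype=np.float16)      # dummy input activation
partial_output = weights[active_rows] @ x
print(partial_output.shape)              # (3,)
```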

In October, Apple released an open-source LLM in partnership with Columbia University. “Ferret” is currently limited to research purposes and effectively acts as a second pair of eyes, telling the user what they are looking at in an image, including specific objects within it.

“One of the problems of an LLM is that the only way of experiencing the world is through text,” said Amanda Stent, director of the Davis Institute for AI at Colby College. “That’s what makes Ferret so exciting: you can start to literally connect the language to the real world.” At this stage, however, the cost of running a single “inference” query of this kind would be huge, Stent said.

Such technology could be used, for example, as a virtual assistant that can tell the user what brand of shirt someone is wearing on a video call, and then order it through an app.

Microsoft recently overtook Apple as the world’s most valuable listed company, with investors excited by the software group’s moves in AI.

Still, Bank of America analysts last week upgraded their rating on Apple stock. Among other things, they cited expectations that the iPhone upgrade cycle will be boosted by demand for new generative AI features set to appear this year and in 2025.

Laura Martin, a senior analyst at Needham, the investment bank, said the company’s AI strategy would be “for the benefit of their Apple ecosystem and to protect their installed base”.

She added: “Apple doesn’t want to be in the business of what Google and Amazon want to do, which is to be the backbone of all American businesses that build apps on large language models.”