With Apple spending a lot of money on generative AI and machine learning models, is it time for us to start prepping for Siri 2.0? 

The Information says Apple has “significantly” increased spending on AI development focused on genAI capabilities within Siri. The report suggests Apple’s internal AI research is pushing in three key directions:

It feels as though Apple may have been slightly stung by criticism of its AI achievements so far. With this in mind, it wants to:

While it’s true that ChatGPT caught almost everyone by surprise, Apple seemed the most left behind once that chatbot appeared. The iPhone maker appears to have put these developments on the fast track and may even have these features ready to roll within iOS 18, the report claimed. Work is being led by a new 16-member team of engineers building the “Foundational Model” LLM on which the company will base its models, at a cost of millions of dollars each day.

Building powerful LLM-based models into Siri may be complicated somewhat by the company’s dedication to customer privacy. That dedication implies that whatever models it does deploy will primarily rely on capabilities that already exist on its devices. That’s where better integration with Shortcuts makes sense, though the company may not be completely reliant on that. Why? Because every Apple chip also carries a Neural Engine, a dedicated area of the chip for handling machine learning tasks.
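To picture what deeper Shortcuts integration could look like from a developer’s side, here is a minimal, hypothetical Swift sketch using the App Intents framework, which is how apps already expose actions to Siri and Shortcuts today; the intent name and its logic are assumptions for illustration, not anything Apple has announced:

import AppIntents

// Hypothetical example: an app action Siri or Shortcuts could invoke,
// with all of the work kept on the device.
struct SummarizeNotesIntent: AppIntent {
    static var title: LocalizedStringResource = "Summarize My Notes"

    @Parameter(title: "Topic")
    var topic: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        // Placeholder for on-device processing; nothing is sent to a server here.
        let summary = "Notes about \(topic), summarized on device."
        return .result(value: summary)
    }
}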

The problem is that Apple’s existing LLMs are quite large, which means they would be difficult to store and run on a device. That limitation suggests the company might develop highly focused automations that work well on the device in certain domains, used in conjunction with cloud-based systems for more complex tasks; the latter might undermine Apple’s environmental work, given the energy and water such server farms devour.
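For a sense of how a compact, domain-specific model can already be run locally, here is a short Swift sketch using Core ML that asks for the Neural Engine where available; the "CompactAssistant" model name is an assumption for illustration and says nothing about Apple’s internal tooling:

import CoreML
import Foundation

/// Loads a hypothetical compact, app-bundled model and prefers the Neural Engine for inference.
func loadOnDeviceModel() throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine  // use the Neural Engine where possible, CPU otherwise

    guard let url = Bundle.main.url(forResource: "CompactAssistant", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    // Predictions made with this model run entirely on the device.
    return try MLModel(contentsOf: url, configuration: config)
}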

Will Apple’s teams figure out how to intelligently harness the user behavior data each device holds while keeping it private? Is it even possible to build AI models that use that data privately by processing it on the device?

The company already does this to some extent: reminders of when to leave in order to get to meetings on time, recognition of incoming communications from key contacts, or even health-related personal fitness trends, for example.

Not every solution needs to be on-device, either. The Information suggests an AppleCare AI assistant may be in development, which would essentially triage user queries, pointing customers toward solutions they can follow for themselves or routing them to relevant human support staff.

But Apple will want to do more and ensure that whatever AI tools it does deploy are actually useful to its customers and work well when handled entirely on the device. Whatever it builds will be designed to support the user experience rather than exploit user data.

These major investments in the Ajax LLM, and also in visual intelligence and multimodal AI, seem to be on the fast track, which suggests Siri 2.0 may be one of the big themes at WWDC 2024. It will also be interesting to see the extent to which these tools evolve to support the "Spatial Computing" visionOS platform the company is building.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.
