Imagine running fleets of iPhones that alert you when unexpected security-related incidents take place, or when otherwise legitimate service requests arrive from devices at an unexpected time or location. Imagine management and security software that not only identifies these kinds of anomalies, but also gives you useful advice to help remediate the problem.

This, and more, is the kind of protection Jamf hopes to deliver using generative AI tools. 

Jamf believes generative AI can be a big benefit to tech support and IT administration, and talked about its efforts at the end of an extensive Jamf Nation User Conference (JNUC) keynote. Akash Kamath, the company's senior vice president of engineering, explained that just as the Mac made computing personal, genAI makes AI personal.

“It's impossible to ignore the seismic shifts AI is causing across industries,” Kamath said during the keynote, noting that McKinsey claims generative AI could help automate nearly one-third of all work hours by 2030.

“That's a game-changer,” said Kamath, “but let's not forget the cautionary examples like ChatGPT's ‘hallucinations,’ which can lead to misinformation. This is particularly alarming for us at Jamf, where we believe that 'Trusted Access' isn't just a tagline — it's a commitment.”

Kamath briefly described Ask Jamf, a genAI project his team is developing that uses a specialized large language model (LLM) trained with proprietary data drawn from Jamf Pro documentation and the company’s user forums.

The concept is that an IT admin can get help building and scheduling policies, developing tutorials for creating specific extensions, and handling other tasks. The project, now in early development, is being tested by admins at the show.
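
To make that concrete, here is a minimal, purely illustrative sketch of how a documentation-grounded assistant of this sort might work: retrieve the doc snippets most relevant to an admin’s question, then fold them into a prompt for the underlying language model. The snippets, the term-overlap retrieval, and the prompt format below are assumptions for illustration, not Jamf’s actual implementation.

```python
# Illustrative sketch of a docs-grounded assistant in the spirit of "Ask Jamf".
# The snippets, retrieval method, and prompt format are assumptions, not
# Jamf's actual implementation.
from collections import Counter
import math

DOC_SNIPPETS = [
    "Policies in Jamf Pro can be triggered by check-in, login, or a custom event.",
    "Use the Scope tab to target a policy at smart groups, static groups, or departments.",
    "Set the execution frequency to 'Once per computer' to avoid repeated runs.",
]

def tokenize(text):
    return [w.strip(".,'").lower() for w in text.split()]

def score(question, snippet):
    # Crude term-overlap relevance score; a production system would use
    # embeddings or a proper search index instead.
    q, s = Counter(tokenize(question)), Counter(tokenize(snippet))
    overlap = sum((q & s).values())
    return overlap / math.sqrt(len(s) + 1)

def build_prompt(question, k=2):
    # Keep the k most relevant excerpts and ask the model to stay within them.
    ranked = sorted(DOC_SNIPPETS, key=lambda sn: score(question, sn), reverse=True)
    context = "\n".join(f"- {sn}" for sn in ranked[:k])
    return (
        "Answer using only the documentation excerpts below.\n"
        f"Documentation:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # The assembled prompt would then be sent to the language model.
    print(build_prompt("How do I schedule a policy to run once per computer?"))
```

Grounding the model in documentation and forum material like this is one common way to reduce the hallucination risk Kamath warned about, since the answer is constrained to text the vendor already trusts.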

“Our aspirations don't end with a language model,” said Kamath. “The ultimate goal of AI integration into Jamf products is to function as a silent partner that amplifies your effectiveness.”

With this in mind, the company has built Hypothesis, a security assistant for IT. “This function acts like an additional member of your security team…, helping you separate the critical from the benign,” he explained.

The tool analyzes security events and their associated telemetry data in real time to give IT better insights into what happened and better advice on what to do next.
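
To give a sense of what separating the critical from the benign can mean in practice, here is a small, hedged sketch: score each incoming event against a handful of signals (unexpected location, odd hours, an unsigned process) and surface the reasons alongside the score. The signals, thresholds, and field names are illustrative assumptions, not the logic Hypothesis actually uses.

```python
# Illustrative-only sketch of event triage of the kind the article describes.
# The signals and thresholds are assumptions, not Jamf's actual logic.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SecurityEvent:
    device_id: str
    kind: str               # e.g. "credential_use", "config_change"
    timestamp: datetime
    country: str
    process_signed: bool

USUAL_COUNTRIES = {"US", "GB"}   # hypothetical per-fleet baseline
BUSINESS_HOURS = range(8, 19)    # 08:00-18:59 local time

def triage(event: SecurityEvent) -> tuple[int, list[str]]:
    """Return a rough severity score plus the reasons behind it."""
    score, reasons = 0, []
    if event.country not in USUAL_COUNTRIES:
        score += 2
        reasons.append(f"request from unexpected location: {event.country}")
    if event.timestamp.hour not in BUSINESS_HOURS:
        score += 1
        reasons.append("activity outside normal working hours")
    if not event.process_signed:
        score += 2
        reasons.append("unsigned process involved")
    return score, reasons

# Example: a 3 a.m. credential use from an unfamiliar country by an unsigned process.
event = SecurityEvent("iphone-042", "credential_use",
                      datetime(2023, 9, 20, 3, 12), "RO", False)
severity, why = triage(event)
print("critical" if severity >= 3 else "benign", why)
```

The value an AI assistant adds on top of rules like these is the explanation layer: turning the raw telemetry and the reasons for a score into plain-language advice on what to do next.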

News that Jamf is working with genAI for admin and security follows a similar announcement from Google at the RSA Conference 2023. That’s where the company announced Google Cloud Security AI Workbench.

Just as with Jamf’s approach, Google’s LLMs rely on security intelligence gathered by the company and its partners. These intelligent security machines should quickly flag threat indicators, offer access to remediation tools, and empower teams to secure systems more swiftly.

“Advances in generative AI can help reduce the number of tools organizations need to secure their vast attack surface areas and ultimately, empower systems to secure themselves,” Google explains.

These solutions are very much works in progress, but you don’t need a weatherman to know which way this wind blows: Generative AI will become an essential part of every IT admin’s toolkit across a multitude of enterprise tools.

The hope is that these layers of automated intelligence will protect company endpoints, central systems, and data while enabling tech teams to optimize what they do by using this kind of tech to handle routine tasks and help them meet complex integration challenges. The devices are learning to protect themselves.

Update: You can learn more about how Jamf thinks about weaving genAI into its products and services in this Q&A with CTO Beth Tschida, here.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.
