WWDC24 Apple Intelligence presentation

What is Apple Intelligence, when is it coming and who will get it?


After months of speculation, Apple Intelligence took center stage at WWDC 2024 in June. The platform was announced in the wake of a torrent of generative AI news from companies like Google and OpenAI, causing concern that the famously tight-lipped tech giant had missed the boat on the latest tech craze.

Contrary to such speculation, however, Apple had a team in place, working on what proved to be a very Apple approach to artificial intelligence. There was still pizzazz amid the demos — Apple always loves to put on a show — but Apple Intelligence is ultimately a very pragmatic take on the category.

Apple Intelligence (yes, AI for short) isn’t a standalone feature. Rather, it’s about integrating AI into existing offerings. While it is a branding exercise in a very real sense, the large language model (LLM)-driven technology will operate behind the scenes. As far as the consumer is concerned, the technology will mostly present itself in the form of new features for existing apps.

We learned more during Apple’s iPhone 16 event, which was held on September 9. During the event, Apple touted a number of AI-powered features coming to its devices, from translation on the Apple Watch Series 10 and visual search on iPhones to a number of tweaks to Siri’s capabilities. The first wave of Apple Intelligence is arriving at the end of October as part of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. A second wave of features is available as part of the iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2 developer betas.

The features launched first in U.S. English. Apple has since added Australian, Canadian, New Zealand, South African, and U.K. English localizations.

Support for Chinese, English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese will arrive in 2025. Notably, users in both China and the EU may not get any access to Apple Intelligence features, owing to regulatory hurdles.

What is Apple Intelligence?

Image Credits: Apple

Cupertino marketing executives have branded Apple Intelligence “AI for the rest of us.” The platform is designed to leverage the things that generative AI already does well, like text and image generation, to improve upon existing features. Like other platforms, including ChatGPT and Google Gemini, Apple Intelligence was trained on large information models. These systems use deep learning to form connections, whether in text, images, video, or music.

The text offering, powered by an LLM, presents itself as Writing Tools. The feature is available across various Apple apps, including Mail, Messages, Pages, and Notifications. It can be used to provide summaries of long text, proofread, and even write messages for you, using content and tone prompts.

Image generation has been integrated as well, in similar fashion, albeit a bit less seamlessly. Users can prompt Apple Intelligence to generate custom emojis (Genmojis) in an Apple house style. Image Playground, meanwhile, is a standalone image generation app that uses prompts to create visual content that can be used in Messages, Keynote, or shared via social media.

Apple Intelligence also marks a long-awaited face-lift for Siri. The smart assistant was early to the game but has mostly been neglected for the past several years. Siri is now integrated much more deeply into Apple’s operating systems; for instance, instead of the familiar icon, users will see a glowing light around the edge of their iPhone screen when it’s doing its thing.

More important, the new Siri works across apps. That means, for example, that you can ask Siri to edit a photo and then insert it directly into a text message. It’s a frictionless experience the assistant had previously lacked. Onscreen awareness means Siri uses the context of the content you’re currently engaged with to provide an appropriate answer.

Who gets Apple Intelligence and when?

iPhone 15 Pro Max in natural titanium, being held, showing the back of the phone
Image Credits: Darrell Etherington

The first wave of Apple Intelligence arrives in October via the iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 updates. These include integrated writing tools, image cleanup, article summaries, and a typing input for the redesigned Siri experience.

Many remaining features will arrive as part of iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. That list includes Genmoji, Image Playground, Visual Intelligence, Image Wand, and ChatGPT integration.

The offering will be free to use, so long as you have one of the following pieces of hardware:

  • All iPhone 16 models
  • iPhone 15 Pro Max (A17 Pro)
  • iPhone 15 Pro (A17 Pro)
  • iPad Pro (M1 and later)
  • iPad Air (M1 and later)
  • iPad mini (A17 Pro)
  • MacBook Air (M1 and later)
  • MacBook Pro (M1 and later)
  • iMac (M1 and later)
  • Mac mini (M1 and later)
  • Mac Studio (M1 Max and later)
  • Mac Pro (M2 Ultra)

Notably, only the Pro versions of the iPhone 15 are getting access, owing to shortcomings on the standard model’s chipset. The entire iPhone 16 line, however, is able to run Apple Intelligence.

Private Cloud Compute

Image Credits: Apple

Apple has taken a small-model, bespoke approach to training. Rather than relying on the kind of kitchen sink approach that fuels platforms like GPT and Gemini, the company has compiled datasets in-house for specific tasks like, say, composing an email. The biggest benefit of this approach is that many of these tasks become far less resource intensive and can be performed on-device.

That doesn’t apply to everything, however. More complex queries will utilize the new Private Cloud Compute offering. The company now operates remote servers running on Apple Silicon, which it claims allows it to offer the same level of privacy as its consumer devices. Whether an action is being performed locally or via the cloud will be invisible to the user, unless their device is offline, at which point remote queries will toss up an error.

Apple Intelligence with third-party apps

OpenAI and ChatGPT logos
Image Credits: Didem Mente/Anadolu Agency / Getty Images

A lot was made about Apple’s pending partnership with OpenAI ahead of WWDC. Ultimately, however, it turned out that the deal was less about powering Apple Intelligence and more about offering an alternative platform for the things it’s not really built for. It’s a tacit acknowledgement that building a small-model system has its limitations.

Apple Intelligence is free. So, too, is access to ChatGPT. However, those with paid accounts to the latter will have access to premium features free users don’t, including unlimited queries.

ChatGPT integration, which debuts on iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, has two primary roles: supplementing Siri’s knowledge base and adding to the existing Writing Tools options.

With the service enabled, certain questions will prompt the new Siri to ask the user to approve its accessing ChatGPT. Recipes and travel planning are examples of questions that may surface the option. Users can also directly prompt Siri to “ask ChatGPT.”

Compose is the other primary ChatGPT feature available through Apple Intelligence. Users can access it in any app that supports the new Writing Tools feature. Compose adds the ability to write content based on a prompt. That joins existing writing tools like Style and Summary.

We know for sure that Apple plans to partner with additional generative AI services. The company all but said that Google Gemini is next on that list.


