• Machine learning has been important to palm rejection on the iPad when drawing with the Apple Pencil for some time. (Image: Apple)

• It is used for live translation in iOS 14. (Image: Apple)

• AI is behind the automatic positioning of home screen widgets. (Image: Apple)

Machine learning (ML) and artificial intelligence (AI) now pervade nearly every feature of the iPhone, but Apple hasn't been touting these technologies as loudly as some of its competitors. I wanted to understand Apple's approach better, so I spent an hour talking with two Apple executives about the company's strategy – and about the privacy implications of all the new AI- and ML-based features.

Historically, Apple has not had a public reputation as a leader in this field. That's partly because people associate AI with digital assistants, and reviewers frequently describe Siri as less useful than Google Assistant or Amazon Alexa. And with ML, many tech enthusiasts say that more data means better models – but Apple is not known for data collection the way Google is.

Despite this, Apple has included dedicated machine learning hardware in most of the devices it ships. Machine intelligence-driven features increasingly dominate the keynotes at which Apple executives take the stage to introduce new features for the iPhone, iPad, or Apple Watch. The launch of Macs with Apple silicon later this year will bring many of the same machine intelligence developments to the company's laptops and desktops.

In the wake of the Apple silicon announcement, I spoke at length with John Giannandrea, Apple's senior vice president for machine learning and AI strategy, and Bob Borchers, VP of product marketing. They described Apple's AI philosophy, explained how machine learning drives certain features, and argued passionately for Apple's on-device AI/ML strategy.


What is Apple's AI strategy?

Both Giannandrea and Borchers joined Apple in the past few years; each previously worked at Google. Borchers actually returned to Apple after a stint away – he had been senior director of marketing for the iPhone until 2009. Giannandrea's 2018 move from Google to Apple, where he had been the head of AI and search, was widely reported.

Google and Apple are very different companies. Google has a reputation for participating in, and in some cases leading, the AI research community, while Apple has historically done most of its work behind closed doors. That has changed in recent years, as machine learning has come to underpin numerous features in Apple's devices and the company has increased its engagement with the AI community.

"When I came to Apple, I was already an iPad user and I loved the pencil," said Giannandrea (who goes to colleagues with "J.G."). "So I would locate the software teams and say," Okay, where's the machine learning team working on the handwriting? "And I couldn't find it." It turned out that the team he was looking for didn't exist – a surprise, he said, since machine learning is one of the best tools available for this feature today.

"I knew there was so much machine learning that Apple should be doing that it was surprising that not everything was actually being done. And that has changed dramatically in the last two to three years," he said. "I really honestly think there isn't a corner of iOS or the Apple experience that won't be transformed by machine learning over the coming few years."

I asked Giannandrea why he felt Apple was the right place for this work. His answer doubled as a concise summary of the company's AI strategy:

I think that Apple has always stood for that intersection of creativity and technology. And I think that when you're thinking about building smart experiences, vertical integration, all the way down from the applications, to the frameworks, to the silicon, is really essential… I think it's a journey, and I think that this is the future of the computing devices that we have: that they be smart, and that that smartness sort of disappears.

Borchers chimed in as well, adding, "This is clearly our approach to everything we do: we focus on what the benefit is, not on how you got there. And in the best cases, it becomes automagic. It disappears… and you just focus on what happened, as opposed to how it happened."

Returning to the handwriting example, Giannandrea argued that Apple is in the best position to "lead the industry" in building machine intelligence-driven features and products:

We made the Pencil, we made the iPad, we made the software for both. It's just a unique opportunity to do a really, really good job. What are we doing a really, really good job at? Letting somebody take notes and be productive with their creative thoughts on digital paper. What I'm interested in is seeing these experiences used at scale in the world.

He contrasted this with Google. "Google is an amazing company, and there are some really great technologists working there," he said. "But fundamentally, their business model is different, and they're not known for shipping consumer experiences that are used by hundreds of millions of people."

How is Apple using machine learning today?

Apple has made a habit of crediting machine learning with improving certain features of the iPhone, Apple Watch, or iPad in its recent marketing presentations, but it rarely goes into detail – and most people who buy an iPhone have never watched those presentations anyway. Contrast this with Google, for example, which puts AI at the center of much of its messaging to consumers.

There are numerous examples of machine learning at work in Apple's software and devices, most of them new within the past couple of years.

Machine learning helps the iPad's software distinguish between a user accidentally resting a palm against the screen while drawing with the Apple Pencil and an intentional press meant to provide input. It's used to model users' habits in order to optimize battery life and device charging, both to extend the time users can go between charges and to protect the battery's long-term health. And it's used to make app recommendations.
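Apple hasn't documented how that palm classifier works internally, but its output surfaces in the public API as system-classified touch types. Here is a minimal sketch of how a drawing app benefits (the DrawingView and addStroke names are illustrative, not Apple code):

```swift
import UIKit

final class DrawingView: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // The system has already classified each touch; palm contact arrives
        // as a .direct (finger-style) touch and is simply skipped here.
        for touch in touches where touch.type == .pencil {
            addStroke(at: touch.location(in: self))
        }
    }

    private func addStroke(at point: CGPoint) {
        // Drawing implementation omitted from this sketch.
    }
}
```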

Then there's Siri, which is perhaps the one thing any iPhone user would immediately recognize as artificial intelligence. Machine learning drives several aspects of Siri, from speech recognition to the assistant's attempts to offer useful answers.
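Siri's internals are private, but the same class of on-device speech recognition is exposed to developers through the Speech framework. A hedged sketch (the transcribe helper is my own, and it assumes speech-recognition permission has already been granted):

```swift
import Speech

func transcribe(fileAt url: URL, completion: @escaping (String?) -> Void) {
    guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {
        completion(nil)
        return
    }
    let request = SFSpeechURLRecognitionRequest(url: url)
    if recognizer.supportsOnDeviceRecognition {
        // Keep the audio on the device entirely (iOS 13+ on supported hardware).
        request.requiresOnDeviceRecognition = true
    }
    _ = recognizer.recognitionTask(with: request) { result, _ in
        guard let result = result, result.isFinal else { return }
        completion(result.bestTranscription.formattedString)
    }
}
```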

Savvy iPhone owners might also notice that machine learning is behind the Photos app's ability to automatically sort pictures into pre-made galleries, or to accurately surface photos of a friend named Jane when her name is entered into the app's search field.
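The models behind Photos' people albums aren't public, but Vision's face detector illustrates the first stage of that kind of pipeline; matching detected faces to a named person like "Jane" would need a further recognition step not shown here. A minimal sketch:

```swift
import Vision

func detectFaces(in image: CGImage) throws -> [VNFaceObservation] {
    // Each observation carries a normalized bounding box for one face.
    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results ?? []
}
```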

In other cases, few users may realize machine learning is at work at all. For example, your iPhone may take multiple pictures in rapid succession each time you tap the shutter button. An ML-trained algorithm then analyzes each image and can composite what it deems the best parts of each into one result.
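Apple hasn't detailed that compositing algorithm, so the following is only a toy stand-in for the frame-selection idea: score each burst frame's sharpness with a simple gradient measure and keep the strongest. A real pipeline merges regions across frames with learned models rather than picking one whole frame.

```swift
import CoreGraphics

func sharpnessScore(of image: CGImage) -> Double {
    let width = image.width, height = image.height
    guard let context = CGContext(
        data: nil, width: width, height: height,
        bitsPerComponent: 8, bytesPerRow: 0,  // let CoreGraphics pick alignment
        space: CGColorSpaceCreateDeviceGray(),
        bitmapInfo: CGImageAlphaInfo.none.rawValue
    ) else { return 0 }
    context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let base = context.data else { return 0 }
    let rowBytes = context.bytesPerRow
    let pixels = base.assumingMemoryBound(to: UInt8.self)

    // Mean squared horizontal gradient: blurry frames have weaker edges.
    var total = 0.0
    for y in 0..<height {
        for x in 1..<width {
            let a = Double(pixels[y * rowBytes + x])
            let b = Double(pixels[y * rowBytes + x - 1])
            total += (a - b) * (a - b)
        }
    }
    return total / Double(width * height)
}

func sharpestFrame(in burst: [CGImage]) -> CGImage? {
    // Pair each frame with its score, then keep the highest-scoring frame.
    zip(burst, burst.map(sharpnessScore(of:))).max { $0.1 < $1.1 }?.0
}
```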

AI is behind Apple's hand-washing assistance feature in the Apple Watch. (Image: Sam Machkovech)

Phones have included image signal processors (ISPs) for improving photo quality digitally and in real time for years, but Apple accelerated the process in 2018 by making the iPhone's ISP work closely with the Neural Engine, the company's recently added machine learning-focused processor.

I asked Giannandrea to name some of the ways Apple uses machine learning in its recent software and products. He offered a laundry list of examples:

There's a whole bunch of new experiences that are powered by machine learning. These are things like language translation, or on-device dictation, or our new features around health, like sleep and hand washing, and stuff we've released in the past around heart health and things like this. I think there are increasingly fewer and fewer places in iOS where we're not using machine learning.

It's hard to find a part of the experience where you're not doing some predictive work. Like, app predictions, or keyboard predictions… Modern smartphone cameras do a ton of machine learning behind the scenes to figure out what they call "saliency," which is like: what's the most important part of the picture? Or, if you imagine blurring the background, you're doing portrait mode.

All of these things benefit from the core machine learning features that are built into the core Apple platform. So, it's almost like, "Find me something where we're not using machine learning."
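The camera pipeline's saliency models are internal, but Vision exposes a public attention-based saliency request (iOS 13 and later) that demonstrates the same idea Giannandrea describes:

```swift
import Vision

func salientRegions(in image: CGImage) throws -> [CGRect] {
    // Attention-based saliency approximates where a viewer would look first.
    let request = VNGenerateAttentionBasedSaliencyImageRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    guard let observation = request.results?.first else { return [] }
    // Bounding boxes are normalized (0...1) with a lower-left origin.
    return observation.salientObjects?.map { $0.boundingBox } ?? []
}
```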

Borchers also pointed to accessibility features as important examples. "They are fundamentally made available and possible because of this," he said. "Things like the sound detection capability, which is game-changing for that particular community, are possible because of the investments over time and the capabilities that are built in."
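The accessibility feature itself is a system setting, but the public SoundAnalysis framework exposes the same kind of on-device sound classification. A sketch, with the caveat that the built-in .version1 classifier identifier shipped in a later OS release than the features discussed here:

```swift
import SoundAnalysis

// Reports the top classification for each analyzed window of audio.
final class SoundObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("Heard \(top.identifier) (confidence \(top.confidence))")
    }
}

func classifySounds(in file: URL) throws {
    let analyzer = try SNAudioFileAnalyzer(url: file)
    let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
    let observer = SoundObserver()
    try analyzer.add(request, withObserver: observer)
    analyzer.analyze()  // runs the classifier locally and calls the observer
}
```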

Additionally, you may have noticed that Apple's software and hardware updates over the past couple of years have emphasized augmented reality features. Most of those features are made possible by machine learning. Per Giannandrea:

Machine learning is used a lot in augmented reality. The hard problem there is what's called SLAM – Simultaneous Localization And Mapping. So, trying to understand: if you have an iPad with a lidar scanner on it and you're moving around, what does it see? And building up a 3D model of what it's actually seeing.

That uses deep learning today, and you need to be able to do it on the device because you want to be able to do it in real time. It wouldn't make sense if you're waving your iPad around and then had to do that work in the data center. So, generally, I would say deep learning in particular is giving us the ability to go from raw data to semantics about that data.
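Developers tap into exactly this on-device SLAM through ARKit. A minimal configuration sketch for lidar-equipped devices (the function name is mine):

```swift
import ARKit

func startSceneReconstruction(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    // Scene reconstruction is only supported on lidar-equipped devices.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }
    session.run(configuration)  // tracking and meshing run on the device
}
```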

Apple increasingly performs machine learning tasks locally on the device, on hardware like the Apple Neural Engine (ANE) or the company's custom-designed GPUs (graphics processing units). Giannandrea and Borchers argued that this approach is what sets Apple's strategy apart from its competitors'.
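Core ML gives third-party apps the same choice: a model can be told to run on whatever local hardware is available, including the Neural Engine. A small sketch, assuming url points to a compiled .mlmodelc in the app bundle:

```swift
import CoreML

func loadOnDeviceModel(at url: URL) throws -> MLModel {
    let configuration = MLModelConfiguration()
    // .all lets Core ML dispatch work to the CPU, GPU, or Neural Engine.
    configuration.computeUnits = .all
    return try MLModel(contentsOf: url, configuration: configuration)
}
```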

Listing image by Aurich Lawson / Getty Images
