As research teams at Google, Microsoft, Facebook, IBM, and even Amazon have broken new ground in artificial intelligence in recent years, Apple has always seemed the odd one out. The company appeared too closed off to meaningfully integrate AI into its software: it wasn't part of the research community, and it didn't offer developer tools for others to bring AI to its platforms.

That's changing. Through a slew of updates and announcements today at its annual developer conference, Apple made it clear that the machine learning found everywhere else in Silicon Valley is foundational to its software as well, and that it's giving developers the power to use AI in their own iOS apps.

Developers, developers, developers

The biggest news today for developers looking to build AI into their iOS apps was barely mentioned on stage. It's a new set of machine learning models and application programming interfaces (APIs) built by Apple, called Core ML. Developers can use these tools to build image recognition into their photo apps, or have a chatbot understand what you're telling it with natural language processing. Apple has initially released four of these models for image recognition, as well as an API for both computer vision and natural language processing. These tools run locally on the user's device, meaning data stays private and never needs to be processed in the cloud. This idea isn't new; even data hoarders like Google have realized the value of letting users keep and process data on their own devices.
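
To make that concrete, here is a minimal Swift sketch of the kind of code an iOS developer could write against the new frameworks: an image is handed to the Vision API, which runs a Core ML image-classification model entirely on the device. The function and parameter names are my own, and the model passed in is an assumption; it could be any image-classification model, such as one of the pre-trained ones Apple is offering for download.

```swift
import UIKit
import Vision
import CoreML

// A minimal sketch of on-device image recognition with Core ML and the
// Vision API. The `model` parameter is an assumption: any image-
// classification MLModel will do, for example one of the pre-trained
// models Apple publishes for download.
func classify(_ image: UIImage, with model: MLModel) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: model) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Vision hands back ranked labels with confidence scores.
        guard let observations = request.results as? [VNClassificationObservation],
              let best = observations.first else { return }
        print("Looks like: \(best.identifier) (confidence \(best.confidence))")
    }

    // The whole pipeline runs locally; the image never leaves the device.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The same pattern applies to the natural language side: the app calls a system API, the model evaluates on the device, and no user data has to be sent to a server.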

Apple also made it easy for AI developers to bring their own flavors of AI to Apple devices. Certain kinds of deep neural networks can be converted directly into Core ML's format. Apple now supports Caffe, an open-source deep-learning framework developed at the University of California, Berkeley, for building and training neural networks, and Keras, a higher-level library that makes that process easier. It notably doesn't support TensorFlow, Google's open-source AI framework, which is by far the most widely used in the AI community. However, there's a loophole: creators can build their own converters. (I personally expect a TensorFlow converter in a matter of days, not weeks.)
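
Once a Caffe or Keras model has been converted into Apple's .mlmodel format (the conversion itself happens ahead of time with Apple's converter tooling), using it in an app is just a matter of loading it. Below is a minimal Swift sketch under the assumption of a hypothetical converted file named ConvertedNet.mlmodel shipped with, or downloaded by, the app.

```swift
import CoreML
import Foundation

enum ModelLoadingError: Error {
    case missingModelFile
}

// A minimal sketch, assuming a model converted from Caffe or Keras has been
// added to the app as "ConvertedNet.mlmodel" (a hypothetical name). Core ML
// compiles the raw model on the device and loads it for purely local inference.
func loadConvertedModel() throws -> MLModel {
    guard let rawModelURL = Bundle.main.url(forResource: "ConvertedNet",
                                            withExtension: "mlmodel") else {
        throw ModelLoadingError.missingModelFile
    }

    // Compile the .mlmodel into Core ML's on-device runtime format (.mlmodelc).
    let compiledModelURL = try MLModel.compileModel(at: rawModelURL)

    // Load the compiled model; predictions then run entirely on the device.
    return try MLModel(contentsOf: compiledModelURL)
}
```

In practice, models added to an Xcode project are compiled automatically at build time; compiling at runtime like this is mainly useful for models an app downloads after install.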

Some of the pre-trained machine learning models that Apple offers are open-sourced Google code, primarily for image recognition. […]

read more – copyright by www.qz.com