As research teams at Google, Microsoft, Facebook, IBM, and even Amazon have broken new ground in artificial intelligence in recent years, Apple always seemed to be the odd man out. It was too closed off to meaningfully integrate AI into the company’s software—it wasn’t a part of the research community, and didn’t have developer tools available for others to bring machine learning to its systems.
That’s changing. Through a slew of updates and announcements today at its annual developer conference, Apple made it clear that the machine learning found everywhere else in Silicon Valley is foundational to its software as well, and it’s giving developers the power to use it in their own iOS apps.
Developers, developers, developers
The biggest news today for developers looking to build machine learning into their iOS apps was barely mentioned on stage. It’s a new set of machine learning models and application programming interfaces (APIs) built by Apple, called Core ML. Developers can use these tools to build image recognition into their photo apps, or have a chatbot understand what you’re telling it with natural language processing. Apple has initially released four of these pre-trained models, as well as an API for both computer vision and natural language processing. These tools run locally on the user’s device, meaning data stays private and never needs to be processed in the cloud. This idea isn’t new—even data hoarders like Google have realized the value of letting users keep and process data on their own devices.
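For a sense of how those pieces fit together, here is a minimal Swift sketch of on-device image classification through the new vision API. The model class `Resnet50` is an assumption, standing in for any Core ML model file added to an Xcode project (Xcode generates a Swift class of the same name):

```swift
import UIKit
import Vision
import CoreML

// A minimal sketch: classify a UIImage entirely on-device.
// `Resnet50` is a stand-in for whatever .mlmodel is in the project;
// its generated class exposes the underlying MLModel as `.model`.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: Resnet50().model)
    else { return }

    // Vision wraps the model and handles scaling/cropping the input.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print("\(top.identifier): \(top.confidence)")
        }
    }

    // Inference runs locally; the image never leaves the device.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```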
Apple also made it easy for developers to bring their own flavors of machine learning to Apple devices. Certain kinds of deep neural networks can be converted directly into Core ML models. Apple now supports Caffe, an open-source framework developed at the University of California-Berkeley for building and training neural networks, and Keras, a tool that makes that process easier. It notably doesn’t support TensorFlow, Google’s open-source framework, which is by far the largest in the community. However, there’s a loophole so creators can build their own converters. (I personally expect a TensorFlow converter in a matter of days, not weeks.)
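Once a Caffe or Keras network has been run through a converter, the resulting model file can be driven from Swift like any other. Below is a hedged sketch using Core ML’s untyped MLModel API; the file and feature names ("Sentiment", "input", "label") are assumptions for illustration and would need to match whatever the converter actually produced:

```swift
import Foundation
import CoreML

// A hedged sketch: run a converted model via the untyped MLModel API.
// "Sentiment", "input", and "label" are illustrative names only.
func predict(with vector: [Double]) throws -> String? {
    guard let url = Bundle.main.url(forResource: "Sentiment",
                                    withExtension: "mlmodelc")
    else { return nil }
    let model = try MLModel(contentsOf: url)

    // Converted Caffe/Keras networks typically take a multi-array input.
    let array = try MLMultiArray(shape: [NSNumber(value: vector.count)],
                                 dataType: .double)
    for (i, value) in vector.enumerated() {
        array[i] = NSNumber(value: value)
    }

    let input = try MLDictionaryFeatureProvider(
        dictionary: ["input": MLFeatureValue(multiArray: array)])
    let output = try model.prediction(from: input)
    return output.featureValue(for: "label")?.stringValue
}
```

The .mlmodelc bundle loaded here is what Xcode compiles a .mlmodel into at build time; in practice most apps would use the generated typed class instead of this lower-level interface.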
Some of the pre-trained machine learning models that Apple offers are open-sourced Google code, primarily for image recognition. […]