Everything Artificial Intelligence has been, currently is, or hopes to be to the enterprise is encapsulated in a single emergent concept: a hybrid term that captures both exactly where AI stands today and where it’s headed in the coming year.
Copyright by www.insidebigdata.com
The ModelOps notion is so emblematic of AI because it accounts for the field’s full breadth (from machine learning to its knowledge base), which Gartner indicates involves rules, agents, knowledge graphs, and more.
ModelOps is about more than simply operationalizing and governing AI models. It’s about doing so quickly, at scale, with full accountability, and in a manner that resolves the most mission-critical business problems—if not those for society, as well.
Moreover, it involves doing so onsite while leveraging the advantages of the cloud and, when it comes to AI’s machine learning prowess, with a range of approaches rooted in supervised, unsupervised, and even reinforcement learning.
Implicit in these capabilities is the need to position machine learning models at the edge, move beyond traditional training-data limitations (and methods), and ingest everything from streaming to static data for a predictive exactness based on the most current data possible.
Or, as SAS Chief Data Scientist Wayne Thompson put it, “Right now, most organizations are just checking the scores for the model and seeing if the model’s scores have changed using an older offline model. What is state of the art is actually putting the model into the training environment, and deploy and train simultaneously and update the model’s weights.”
In many ways, ModelOps is just an updated term for model management, albeit one that acknowledges that AI is more than mere statistics while prioritizing timely deployments. ModelOps is perfected when organizations can expedite the creation and operation of tailored models for any specific use case. Thompson cited a banking example where the institution “wanted a push button system and have that thing run much like a factory. And, yes they want to be able to checkpoint and see if things are going out of whack at any point in time, but they want to truly automate.”
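The "push button ... run much like a factory" idea reduces to a loop that trains one tailored model per use case and checkpoints a health metric after each run, so anything "going out of whack" is flagged automatically. The segment names, data, and accuracy threshold below are assumptions for illustration, not details from the banking example.

```python
# Hedged sketch of an automated model factory: one model per customer
# segment, with a checkpointed health metric per run. Data is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def make_segment_data(n=400):
    """Simulated per-segment training data (illustrative only)."""
    X = rng.normal(size=(n, 3))
    y = X @ np.array([1.0, -0.5, 0.25]) + rng.normal(scale=0.3, size=n) > 0
    return X, y.astype(int)

MIN_ACCURACY = 0.80  # assumed checkpoint threshold
registry = {}

for segment in ["retail", "small_business", "premier"]:
    X, y = make_segment_data()
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    acc = model.score(X_te, y_te)
    registry[segment] = {"model": model, "accuracy": acc,
                         "healthy": acc >= MIN_ACCURACY}

for segment, entry in registry.items():
    print(f"{segment}: accuracy={entry['accuracy']:.3f} "
          f"healthy={entry['healthy']}")
```

A real factory would swap the simulated data for each use case's feature pipeline, but the shape stays the same: automate the runs, keep the checkpoints.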
Platforms tailored around model management facilitate these benefits in several ways. First, they can score models and their results by placing “these models into packages like this ASTORE or into a scoring function and hand that over to a much more conservative, much more structured, much more highly regulated [audience]: something that has 99999 [99.999 percent] reliability associated with it,” Thompson said. They can also integrate model production into workflows with APIs, illustrating the cloud’s mounting importance to AI. Most importantly, they can deploy models at production points to dynamically adjust weights and measurements with real data, as opposed to stale or historic data.
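Handing a model over as a hardened scoring function means production only does arithmetic on a frozen artifact; no training code ships. The JSON contract and coefficient values below are assumptions to make the mechanics concrete, not a SAS API.

```python
# Sketch of a frozen scoring function: the packaged model reduces to a
# coefficient payload, and the endpoint just applies a logistic score.
import json
import math

# Frozen artifact: what a packaged model might reduce to (made-up values).
ARTIFACT = {"coef": [0.8, -1.2, 0.4], "intercept": 0.1}

def score_request(body: str) -> str:
    """Take a JSON request with 'features', return a JSON score."""
    features = json.loads(body)["features"]
    z = ARTIFACT["intercept"] + sum(
        c * x for c, x in zip(ARTIFACT["coef"], features)
    )
    prob = 1.0 / (1.0 + math.exp(-z))  # logistic score
    return json.dumps({"score": round(prob, 4)})

print(score_request('{"features": [1.0, 0.5, -0.2]}'))
```

Because the function is stateless and dependency-light, it is the kind of component that can sit behind a workflow API with very high reliability guarantees.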
The Internet of Things and edge computing provide peerless opportunities to update models in real time to counter model drift, which will otherwise intrinsically occur over time. Whereas ModelOps use cases in finance involve automating the scaffolding and delivery of models—at scale—for targeted customer micro-segmentation, compelling IoT deployments center on preeminent public (and private) health concerns with streaming data and, oftentimes, computer vision. The predominant problem with the so-called AIoT is deploying credible models on endpoint devices “because deep learning models are so big,” Thompson reflected.
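Drift monitoring on streaming data often amounts to comparing the score distribution of recent traffic against a training-time baseline. One common metric is the Population Stability Index (PSI), with the widely used rule of thumb that PSI above 0.2 signals meaningful drift; the data below is simulated to show a stable stream and a drifted one.

```python
# Minimal drift check: PSI between baseline scores and live-traffic scores.
import numpy as np

rng = np.random.default_rng(2)

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])  # keep all scores in range
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) when a bin is empty.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

baseline = rng.beta(2, 5, size=5000)  # training-time score distribution
stable = rng.beta(2, 5, size=1000)    # live traffic, same pattern
drifted = rng.beta(5, 2, size=1000)   # live traffic after drift

print(f"stable PSI:  {psi(baseline, stable):.3f}")
print(f"drifted PSI: {psi(baseline, drifted):.3f}")
```

Run continuously against streaming scores, a check like this is what turns "model drift will intrinsically occur" into an actionable retraining trigger.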
A reliable solution is to position them into an “ASTORE file, which is just a binary blob that we pack all these coefficients into so that it’s transparent to you,” Thompson remarked. “That binary file gets stored and compacted, and then can be shared.” With this method, organizations can support computer vision and object detection use cases to ensure people maintain social distancing, implement contact tracing, or just monitor equipment asset health in the Industrial Internet. Moreover, they can leverage an approach in which models adjust to the actual production data, while utilizing architecture and hardware best practices for TinyML. […]
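The binary-blob idea can be made concrete with a toy serializer: pack a model's weights as compact float32 values and compress them, so a small edge device can load them without any training framework. The format here (length-prefixed float32 plus zlib) is a made-up stand-in for SAS's ASTORE, shown only to illustrate the mechanics of storing, compacting, and sharing a model as bytes.

```python
# Toy binary model format: length-prefixed float32 weights, zlib-compressed.
import struct
import zlib

def pack_model(weights: list[float]) -> bytes:
    """Serialize weights as float32 and compress for shipping to the edge."""
    raw = struct.pack(f"<I{len(weights)}f", len(weights), *weights)
    return zlib.compress(raw)

def unpack_model(blob: bytes) -> list[float]:
    """Inverse of pack_model, as an edge device would run it."""
    raw = zlib.decompress(blob)
    (n,) = struct.unpack_from("<I", raw)
    return list(struct.unpack_from(f"<{n}f", raw, offset=4))

weights = [0.25, -1.5, 3.0, 0.0]
blob = pack_model(weights)
print(len(blob), unpack_model(blob))
```

Real deep learning models add quantization and operator graphs on top, but the principle is the same: the artifact that ships is a compact, self-describing blob, not the training environment.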