copyright by www.rcrwireless.com
Intelligence at the edge is exciting. The edge allows devices to compute and analyze data close to the user rather than in a centralized data center far away, which benefits the end user in many ways. It promises low latency, as the brains of the system are close by rather than thousands of miles away in the cloud; it functions over a local network connection rather than an internet connection, which may not always be available; and it offers a stronger guarantee of privacy, because a user’s information is not transmitted to remote servers. We will soon be able to process data close to (or even inside) endpoint devices, so that we can reap the full potential of intelligent analytics and decision-making.
But the computing power, storage and memory required to run current AI algorithms at the endpoint are hampering our ability to optimize processing there. These are serious limitations, especially when the operations are time-critical.
To make intelligence at the edge a reality, the ability to understand, represent and handle context is critical.
What does that mean? It means giving computing systems the tools to identify and learn what is needed, and only what is needed. Why generate and analyze useless or low-priority data? Capture what is needed for the purpose at hand and move on. Intelligent machines at the edge should be able to “learn” new concepts needed to perform their tasks efficiently, and they should also be able to “systematically forget” the concepts not needed for those tasks.
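To make the “learn what is needed, forget the rest” idea concrete, here is a minimal sketch of how an edge node might filter incoming events against its current task. Everything here — the `TaskContextFilter` class, the concept labels, the event format — is a hypothetical illustration, not any particular product’s API:

```python
from collections import deque

class TaskContextFilter:
    """Hypothetical edge filter: keep only data relevant to the current task."""

    def __init__(self, relevant_concepts, memory_limit=100):
        self.relevant = set(relevant_concepts)
        self.buffer = deque(maxlen=memory_limit)  # bounded local memory

    def observe(self, event):
        # Keep an event only if it matches the current task's concepts;
        # anything else is dropped immediately, never stored or transmitted.
        if event["concept"] in self.relevant:
            self.buffer.append(event)

    def retask(self, new_concepts):
        # "Learn" the new concepts and "systematically forget" the old ones:
        # stored events that no longer match the task are discarded.
        self.relevant = set(new_concepts)
        self.buffer = deque(
            (e for e in self.buffer if e["concept"] in self.relevant),
            maxlen=self.buffer.maxlen,
        )

f = TaskContextFilter({"person", "vehicle"})
f.observe({"concept": "person", "ts": 1})       # relevant: kept
f.observe({"concept": "cloud_cover", "ts": 2})  # irrelevant: discarded
f.retask({"vehicle"})                           # person events forgotten
```

The point is not the data structure but the policy: relevance is decided at capture time, close to the sensor, so nothing useless is ever analyzed or shipped upstream.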
Humans learn contextually. Computer systems at the edge don’t – at least, not quite yet. But when they can, the power of AI and machine learning will be transformational.
The “Edge” of innovation
There are many definitions of context. Among other things, context can be relational, semantic, functional or positional. For our discussion we will use this [1] definition: “A system is context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user’s task.”
Here’s a simple example in which object recognition is strongly influenced by contextual information [1]. A recognition system makes assumptions about an object’s identity based on its size and location in the scene. Consider the region within the red boundary in both images. Taken in isolation, it is nearly impossible to classify the object within that region because of the severe blur in both images. […]
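The mechanism behind this example can be sketched as combining a classifier’s per-class likelihoods with a scene-context prior, in the spirit of Bayes’ rule. All numbers and class names below are made up for illustration — the blurred region alone is a coin flip, and context breaks the tie:

```python
def contextual_posterior(likelihoods, context_prior):
    """Combine per-class likelihoods with a scene-context prior (Bayes rule)."""
    joint = {c: likelihoods[c] * context_prior.get(c, 0.0) for c in likelihoods}
    total = sum(joint.values()) or 1.0
    return {c: p / total for c, p in joint.items()}

# In isolation, the blurred blob is ambiguous between "car" and "person":
likelihoods = {"car": 0.5, "person": 0.5}

# A street-scene context (object at road height, roughly car-sized)
# favours "car" before the pixels are even considered:
street_prior = {"car": 0.8, "person": 0.2}

posterior = contextual_posterior(likelihoods, street_prior)
best = max(posterior, key=posterior.get)  # context resolves the ambiguity
```

When the appearance evidence is uninformative, the posterior is driven almost entirely by the prior — which is exactly why the same blurred patch can read as a car in one scene and a pedestrian in another.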