Business applications and data must move as close to the data ingestion point as possible, but it’s easier said than done.
The rise of IoT, 5G, and AR/VR has long been driving the need to bring computing to the network edge. But now, amid the COVID-19 pandemic, demand for high-speed networks is growing at an unprecedented rate. Video conferencing and content streaming are at record highs, and both require higher bandwidth and near-zero-latency data transfer. While low latency is usually defined as less than ten milliseconds, in this world of hyperconnected remote work, even five milliseconds is too slow.
Networks have never been more critical than they are right now. Between conference calls and streaming media, service providers can’t afford lag, downgraded resolution, or slow caching. To address this, business applications and data must move as close to the data ingestion point as possible, reducing the overall round-trip time and ultimately allowing applications to access information in real time.
But that’s easier said than done.
Confronting the Challenges
For service providers in particular, edge computing comes with unique challenges. The proliferation of solutions at the edge means containers are constantly being deployed faster than humans can manage them. While orchestration tools can automate deployment, observability is what makes automated troubleshooting and service assurance possible.
After all, any service disruption comes with an outpouring of customer complaints, so service providers put pressure on IT teams to address the issues as quickly as possible. While IT already has the information needed to identify the source of the problem and solve it, challenges arise when sifting through reams of telemetry data spread across server components. IT teams need the ability to process the data quickly and gain valuable insights based on visible trends.
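As a minimal sketch of what that trend-based processing could look like (the metric names, window size, and threshold below are illustrative assumptions, not details from the article), a rolling z-score over streamed telemetry can surface the component whose behavior is drifting before it turns into a customer-visible outage:

```python
from collections import deque
from statistics import mean, stdev

# Illustrative assumptions: metric names, window size, and threshold are not from the article.
WINDOW = 60          # number of recent samples kept per metric
Z_THRESHOLD = 3.0    # flag values more than 3 standard deviations from the rolling mean

history: dict[str, deque] = {}

def observe(component: str, metric: str, value: float) -> bool:
    """Record one telemetry sample and return True if it looks anomalous."""
    key = f"{component}/{metric}"
    window = history.setdefault(key, deque(maxlen=WINDOW))
    anomalous = False
    if len(window) >= 10:                      # need a minimal baseline first
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(value - mu) > Z_THRESHOLD * sigma:
            anomalous = True
    window.append(value)
    return anomalous

# Example: stream samples from many server components through the same function.
if observe("edge-site-12/nic0", "rx_errors_per_s", 42.0):
    print("flag edge-site-12/nic0 for investigation")
```

A simple rolling statistic like this is only a starting point, but it illustrates the idea of turning raw telemetry into a trend signal that automation can act on.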
A Data-Driven Solution
The solution lies with AI capabilities, specifically machine learning, which powers the orchestration solutions that deliver predictive and scalable operations across workloads. Combining machine learning with real-time network monitoring can provide the insights necessary to power automated tools capable of provisioning, instantiating, and configuring physical and virtual network functions faster and more accurately than a human could. This also frees IT teams to spend their time on mission-critical, higher-value initiatives that contribute to the bottom line.
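A hedged sketch of that closed loop is shown below; the forecasting model, the orchestrator call, and the per-replica capacity figure are all hypothetical stand-ins for whatever ML library and VNF manager a provider actually uses:

```python
import time

class LoadForecaster:
    """Placeholder for a trained model that predicts near-term peak load for a site."""
    def predict_peak_rps(self, recent_samples: list[float]) -> float:
        # Naive stand-in: assume the next peak resembles the recent maximum plus headroom.
        return max(recent_samples) * 1.2 if recent_samples else 0.0

def scale_vnf(site: str, vnf: str, replicas: int) -> None:
    """Stand-in for an orchestrator API call that sets a VNF's replica count."""
    print(f"scaling {vnf} at {site} to {replicas} replicas")

CAPACITY_PER_REPLICA = 500.0   # assumed requests/sec one VNF instance can serve

def control_loop(site: str, vnf: str, get_recent_rps) -> None:
    """Monitor -> predict -> act: provision ahead of demand instead of reacting to outages."""
    forecaster = LoadForecaster()
    while True:
        forecast = forecaster.predict_peak_rps(get_recent_rps())
        needed = max(1, int(forecast // CAPACITY_PER_REPLICA) + 1)
        scale_vnf(site, vnf, needed)
        time.sleep(60)   # re-evaluate every minute
```

The point of the sketch is the shape of the loop, not the model: predictions from monitoring data feed directly into automated provisioning decisions.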
Bringing AI to the Cloud
Machine learning also has a critical role in application life cycle management at the edge. In an environment that consists of a few centralized data centers, operators can determine the optimal performance conditions of an application’s virtual network functions (VNFs). As the environment disaggregates into thousands of small sites, VNFs have more sophisticated needs that must be catered to accordingly.
Because operators don’t have the bandwidth to cope with these needs, machine learning algorithms can run all of the individual components through a pre-production cycle to evaluate how they will behave in a production site, giving operations staff the reassurance that the apps being tested will work at the edge.[…]
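A minimal sketch of such a pre-production gate follows; the performance model, the configuration fields, and the 10 ms latency budget are assumptions made for illustration, not details from the article:

```python
from dataclasses import dataclass

# Illustrative assumptions: the model, fields, and latency budget are hypothetical.
LATENCY_BUDGET_MS = 10.0

@dataclass
class VnfCandidate:
    name: str
    cpu_cores: int
    memory_gb: int

def predict_p99_latency_ms(candidate: VnfCandidate, offered_load_rps: float) -> float:
    """Stand-in for a model trained on pre-production runs of similar components."""
    base = 2.0 + offered_load_rps / (candidate.cpu_cores * 400.0)
    return base * (1.0 if candidate.memory_gb >= 4 else 1.5)

def preproduction_gate(candidates: list[VnfCandidate], expected_load_rps: float) -> list[VnfCandidate]:
    """Return only the candidates predicted to meet the latency budget at the edge site."""
    return [c for c in candidates
            if predict_p99_latency_ms(c, expected_load_rps) <= LATENCY_BUDGET_MS]

approved = preproduction_gate(
    [VnfCandidate("firewall-v2", cpu_cores=2, memory_gb=4),
     VnfCandidate("firewall-v2-lite", cpu_cores=1, memory_gb=2)],
    expected_load_rps=1200.0,
)
print([c.name for c in approved])
```

Running every candidate configuration through a predicted-performance check like this is one way operations staff can gain confidence that a component will behave at a small edge site the way it did in testing.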