Although artificial intelligence has been touted for its potential to increase efficiency across many industries, the technology has hit roadblocks of its own. Reliance on centralized servers has left some systems struggling with high latency or even outright crashes. That said, there may be a solution to these inefficiencies: edge computing.
Edge computing refers to processing data on networks and devices closer to the user, as opposed to the traditional approach of centralized servers and data centers. Edge computing can be seen in action in everything from point of sale (POS) systems to Internet of Things (IoT) devices and sensors; essentially, edge devices are anything that computes locally and interacts with the cloud. Now, artificial intelligence models are beginning to join that category.
The intersection of AI and edge computing
Artificial intelligence expert Siddhartha Rao has firsthand experience working with both AI and cloud computing technologies. After more than a decade working with leading technology companies like Amazon Web Services, Rao now serves as the co-founder and CEO of Positron Networks, a company focused on artificial intelligence solutions for the scientific research community. Given the unique needs of this community, Rao is particularly interested in the intersection of AI and edge computing.
“There are several reasons why edge computing has become such a prominent paradigm, including lowering the latency of user interactions, lowering cloud computing costs, and supporting offline user experiences,” explains Rao. “These benefits are in contention with other objectives of edge computing, such as improving margins by lowering device costs, extending battery life with low-power processors, or downloading model updates in low-bandwidth environments such as developing countries.”
However, this raises the question of how artificial intelligence models, which require significant computing power, can be run “on the edge.” Rao explains that successfully transitioning artificial intelligence models to the edge requires simplifying the operations they perform.
“For context, a model is a sequence of linear algebra (matrix mathematics) operations sequentially executed to predict a response based on an input,” Rao explains. “Machine learning engineers and scientists reduce the mathematical complexity of these operations by applying various techniques. The result is smaller models requiring fewer computing cycles to execute, lowering computing requirements and improving margins while improving battery life.”
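To make that concrete, the sketch below treats a tiny model as exactly what Rao describes: a chain of matrix multiplications, and then applies one common simplification technique, post-training int8 quantization. The layer sizes, the symmetric quantization scheme, and the NumPy implementation are illustrative assumptions, not code from Rao or Positron Networks.

```python
# Illustrative sketch: a "model" as a chain of matrix operations, and how
# post-training quantization shrinks it for edge hardware. Sizes and the
# symmetric int8 scheme are assumptions chosen for clarity.
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer model: y = relu(x @ W1) @ W2
W1 = rng.standard_normal((128, 256)).astype(np.float32)
W2 = rng.standard_normal((256, 10)).astype(np.float32)

def forward_fp32(x):
    return np.maximum(x @ W1, 0) @ W2

def quantize(w):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

Q1, s1 = quantize(W1)
Q2, s2 = quantize(W2)

def forward_int8(x):
    # Dequantize on the fly here for clarity; real edge runtimes keep the
    # arithmetic in integers to save cycles and power.
    h = np.maximum(x @ (Q1.astype(np.float32) * s1), 0)
    return h @ (Q2.astype(np.float32) * s2)

x = rng.standard_normal((1, 128)).astype(np.float32)
print("fp32 weight bytes:", W1.nbytes + W2.nbytes)   # 4 bytes per weight
print("int8 weight bytes:", Q1.nbytes + Q2.nbytes)   # roughly 4x smaller
print("max output drift :", np.abs(forward_fp32(x) - forward_int8(x)).max())
```

The weights shrink roughly fourfold, the kind of saving that lets a model fit within the memory, power, and compute budget of an edge device, at the cost of a small numerical drift in the outputs.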
“This engineering has several positive impacts on the industry,” he continues. “For example, models that execute with lower latency, use less battery power, and produce less heat. They also use fewer cloud computing resources to execute, further lowering their environmental impact, and they reduce model update bandwidth consumption. All these benefits improve the user experience while improving margins by lowering the cost of goods.”
Why edge computing is the future of AI
The chief benefit of edge computing is that it significantly improves the user experience. Latency is one of the biggest challenges any technological innovation faces, especially considering humans’ short (and decreasing) attention spans.
“If an edge device must always go to the cloud for a prediction, the impact on latency degrades the user experience, reducing customer engagement,” Rao explains. “Less engaged customers are less likely to leverage the device, reducing utility and adoption.”
Edge computing can also reduce the cost of artificial intelligence technology to a level more attainable for smaller businesses. After all, maintaining the servers required to operate large-scale models is expensive. By simplifying these applications so they can run on the edge, much of that server burden falls away.
However, Rao also cautions against some of the consequences that edge computing in the AI sphere could have, referring to them as “tradeoffs” for the benefits it offers. “Higher error and hallucination rates as using lower precision subsequently affects the accuracy of the answer,” he explains. “Knowledge distillation can amplify bias and fairness problems in the larger models. Finally, edge computing requires hiring expensive, specialized machine learning talent and acquiring expensive machine learning training infrastructure such as GPUs, which are in high demand.”
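Knowledge distillation, the compression route Rao flags for its bias risks, trains a small “student” model to imitate a large “teacher.” The sketch below shows the standard distillation objective, blending the teacher’s softened predictions with the true label; the temperature, weighting, and example logits are assumptions for illustration, not anything specific to Rao’s work.

```python
# Minimal sketch of a knowledge-distillation loss: the student is pulled
# toward the teacher's softened output distribution as well as the hard label.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.7):
    """Blend of soft-target KL divergence (teacher -> student) and hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft = np.sum(p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)))
    hard = -np.log(softmax(student_logits)[label] + 1e-12)
    # T**2 rescales the contribution of the softened targets.
    return alpha * (T ** 2) * soft + (1 - alpha) * hard

teacher = np.array([4.0, 1.0, 0.2])   # confident large model
student = np.array([2.5, 1.2, 0.4])   # smaller edge model
print(distillation_loss(student, teacher, label=0))
```

Because the student learns from whatever the teacher predicts, confidently wrong or skewed teacher outputs are passed straight down, which is how distillation can amplify bias and fairness problems in the compressed model.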
As an example of the successful intersection of artificial intelligence and edge computing technology, Rao points to a use case from his time at AWS that successfully simplified a model at scale. “A model I sponsored at AWS was used in the Opus codec,” he says. “This codec is used by over 1 billion devices globally and was recently upgraded by Amazon with a machine learning-based packet concealment algorithm that recovered audio streams even in lossy networks. This codec can be used on devices with limited processing power, such as a Raspberry Pi or desk phone, to predict audio samples in milliseconds.”
Rao also mentions a use case that has shown particular potential in the defense sector. “Real-time video was processed on cameras on a soldier’s rifle scope to guide them on whether a combatant’s movements were suspicious or possibly in support of a malicious activity such as terrorism,” he adds. “The soldier could then focus surveillance on sensitive and high-risk battlefield areas. In both examples, complex, real-time audio or video was being processed by low-powered microprocessors running on IoT devices.”
Indeed, these use cases are prime examples of how artificial intelligence run on the edge can deliver superior results in terms of efficiency and cost. “If the latest AI models cannot be optimized to run on the edge, their applications will be limited to cloud applications and user experience,” Rao concludes.