How Edge Computing And Serverless Deliver Scalable Machine Learning Services

Machine Learning, Edge Computing, and Serverless are the three key technologies that will redefine Cloud Computing platforms.

Machine Learning (ML) is becoming an integral part of modern applications. From the web to mobile to IoT, ML is powering the new breed of applications through natural user experiences and inbuilt intelligence.

After virtualization and containerization, Serverless is emerging as the next wave of compute services. Serverless, or Functions as a Service (FaaS), attempts to simplify the developer experience by minimizing the operational overhead of deploying and managing code. Contemporary applications designed as microservices are built on top of FaaS platforms such as AWS Lambda, Azure Functions, Google Cloud Functions, and OpenWhisk.

Edge Computing takes compute closer to the applications. Each edge location mimics the public cloud by exposing a compatible set of services and endpoints that the applications can consume. It is all set to redefine enterprise infrastructure.

These three emerging technologies – Serverless, Edge Computing and Machine Learning – will be the key technology drivers for the next generation of infrastructure. The objective of this article is to explain how developers will benefit from the combination of these technologies.

The availability of data, ample storage capacity, and sufficient computing power are essential for implementing Machine Learning, which makes the cloud a natural fit. Data Scientists rely on the cloud to ingest and store massive datasets, and they use pay-as-you-go infrastructure to process and analyze the data. With cheaper storage and advanced computing platforms powered by GPUs and FPGAs, the cloud is fast becoming the destination for building complex ML models.

At a high level, there are three steps involved in building ML-based applications. The first phase is training an algorithm with existing data. The second phase is validating the outcome for accuracy against test data. These two steps are repeated until the algorithm achieves the expected accuracy. With each iteration, the algorithm learns more about the data and finds new patterns, increasing its accuracy. The output of these two steps is referred to as a Machine Learning model, which is carefully tuned to work with new data points. The third and final step is invoking the model with production data to achieve the expected outcome, which may be a prediction, a classification, or a grouping of new data.
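
To make these phases concrete, here is a minimal sketch in Python using scikit-learn and a toy dataset; the 0.95 accuracy target and the parameter sweep are illustrative assumptions, not a prescribed workflow.

```python
# A minimal sketch of the three phases; the dataset, the 0.95 accuracy
# target, and the n_estimators sweep are all hypothetical choices.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Phases 1 and 2: train and validate, iterating until the target is met.
for n_estimators in (10, 50, 100, 200):
    model = RandomForestClassifier(n_estimators=n_estimators)
    model.fit(X_train, y_train)                                # training
    accuracy = accuracy_score(y_test, model.predict(X_test))   # validation
    if accuracy >= 0.95:
        break

# Phase 3: invoke the tuned model with a new data point.
print(model.predict(X_test[:1]))
```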

The first two phases of ML involve heavy lifting, which is tackled by the cloud. The training and test data are stored in cloud storage, while a special class of virtual machines is used to tune the algorithm. The interesting fact is that the final, evolved ML model doesn't need many resources. It is a piece of code containing the parameters obtained from the previous two phases of rigorous training and validation. For many scenarios, this model can be embedded within an application as a standalone entity. Based on its predefined parameters, it analyzes new data points as they are generated. When the production dataset submitted to the model differs significantly from what is expected, the algorithm needs to be retrained. At this point, the training and validation phases are repeated to evolve a new model, which is then redeployed to the applications to handle the production datasets.
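
As a sketch of how lightweight that artifact can be, assuming a scikit-learn model serialized with joblib; the file name and the sample data point are hypothetical:

```python
import joblib
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Stand-in for the model evolved during training and validation.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=100).fit(X, y)

# The tuned model is exported as a single small artifact.
joblib.dump(model, "model-v1.joblib")

# Inside the application, the artifact is loaded and invoked directly;
# no training infrastructure is needed at this point.
model = joblib.load("model-v1.joblib")
new_point = np.array([[5.1, 3.5, 1.4, 0.2]])   # a hypothetical data point
print(model.predict(new_point))
```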

From an operations standpoint, generating ML models requires provisioning and configuring a variety of storage and compute resources, and DevOps teams manage the infrastructure needed for this task. The Ops team also has to ensure that the evolved model is delivered to the applications. Each model may be tracked and maintained through a versioning mechanism. Finally, the model should be made available to the developers who consume it in their applications. This is where Serverless platforms play a vital role in simplifying the DevOps cycle.
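
One way the versioning step could look, as a minimal sketch: publishing the artifact to an Amazon S3 bucket with boto3. The bucket name and key layout are hypothetical.

```python
import boto3

s3 = boto3.client("s3")
version = "v2"

# Publish the artifact under a versioned key so every model is tracked.
s3.upload_file("model-v2.joblib", "ml-models-bucket",
               f"predictive-maintenance/{version}/model.joblib")

# A "latest" pointer lets consuming applications discover the current version.
s3.put_object(Bucket="ml-models-bucket",
              Key="predictive-maintenance/latest",
              Body=version.encode())
```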

Even though a Machine Learning model is generated in the cloud, it may not be invoked there. For most scenarios, the model should be kept close to the applications. For example, a predictive maintenance model built to detect malfunctions in a connected car needs to run closer to the automobile than in the cloud. These models are typically pushed to the edge of the Cloud Computing layer. Similar to a Content Delivery Network (CDN) that caches static content and video streams across multiple points of presence, the ML model needs to be hosted across multiple locations within an edge network. Applications invoke the ML model closest to their location, which reduces latency by avoiding the round trip to the cloud. Since the DevOps teams are responsible for pushing the latest ML model across multiple edge locations, they can automate the process of upgrading the model.
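
That automation might be as simple as the sketch below, which pushes the latest artifact to every edge location, much as a CDN propagates content. The endpoint URLs and the upload API are hypothetical.

```python
import requests

# Hypothetical edge endpoints that accept model uploads.
EDGE_LOCATIONS = [
    "https://edge-us-west.example.com",
    "https://edge-eu-central.example.com",
    "https://edge-ap-south.example.com",
]

with open("model-v2.joblib", "rb") as artifact:
    payload = artifact.read()

for endpoint in EDGE_LOCATIONS:
    resp = requests.put(f"{endpoint}/models/predictive-maintenance/v2",
                        data=payload)
    resp.raise_for_status()   # fail loudly so the rollout can be retried
```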

At each edge location, the ML model is deployed as a Serverless function, which is invoked by applications. Since the unit of deployment in FaaS is a function, this is far more efficient than pushing a heavyweight virtual machine or container. Each time a new ML model is evolved, it is assigned a new version and pushed across all the locations. This makes the process more efficient and less error-prone.
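
Such a function could look like the AWS Lambda-style handler sketched below; the event shape, model file, and response format are assumptions for illustration.

```python
import json
import joblib

# Loaded once per function instance and reused across invocations.
model = joblib.load("model.joblib")

def handler(event, context):
    # Hypothetical event shape: {"body": "{\"features\": [...]}"}
    features = json.loads(event["body"])["features"]
    prediction = model.predict([features])
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": prediction.tolist()}),
    }
```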

To summarize, while the heavy lifting for ML will be done in the cloud, the edge layer will simplify the deployment experience and Serverless will streamline the developer experience.
