Nutanix, Inc.
APPARATUS AND METHOD FOR DEPLOYING A MACHINE LEARNING INFERENCE AS A SERVICE AT EDGE SYSTEMS
Last updated:
Abstract:
An example edge system of an Internet of Things system may include a memory configured to store a machine learning (ML) model application having a ML model, and a processor configured to cause a ML inference service to receive a request for an inference from the ML model application, and load the ML model application from the memory into an inference engine in response to the request. The processor is further configured to cause the ML inference service to select a runtime environment from the ML model application to execute the ML model based on a hardware configuration of the edge system, and execute the ML model using the selected runtime environment to provide inference results. The inference results are provided at an output, such as to a data plane, or are stored in the memory.
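The flow described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the actual implementation: all class names, runtime labels, and the hardware-capability fields are assumptions introduced for clarity.

```python
# Hypothetical sketch of the described inference-service flow: receive a
# request, select a runtime environment from those bundled with the ML
# model application based on the edge system's hardware configuration,
# execute the model, and provide the results at an output (here, an
# in-memory store standing in for the edge system's memory).
# All names are illustrative assumptions, not the patented implementation.

from dataclasses import dataclass


@dataclass
class HardwareConfig:
    """Hardware configuration of the edge system (assumed fields)."""
    has_gpu: bool = False
    has_tpu: bool = False


@dataclass
class MLModelApplication:
    """ML model application with its supported runtimes (assumption)."""
    name: str
    supported_runtimes: tuple = ("cpu",)


class MLInferenceService:
    def __init__(self, hardware: HardwareConfig):
        self.hardware = hardware
        self.results_store = []  # stands in for storing results in memory

    def select_runtime(self, app: MLModelApplication) -> str:
        """Pick the most capable runtime the hardware supports."""
        if self.hardware.has_tpu and "tpu" in app.supported_runtimes:
            return "tpu"
        if self.hardware.has_gpu and "gpu" in app.supported_runtimes:
            return "gpu"
        return "cpu"

    def infer(self, app: MLModelApplication, request: dict) -> dict:
        runtime = self.select_runtime(app)
        # A real service would load the model into an inference engine
        # and execute it here; this sketch just records the decision.
        result = {"model": app.name, "runtime": runtime, "input": request}
        self.results_store.append(result)  # store result in memory
        return result  # or publish to a data plane


service = MLInferenceService(HardwareConfig(has_gpu=True))
app = MLModelApplication("detector", supported_runtimes=("cpu", "gpu"))
out = service.infer(app, {"image_id": 42})
print(out["runtime"])  # → gpu
```

The key design point mirrored here is that runtime selection is driven by the edge system's hardware configuration rather than fixed per model, so the same ML model application can run on heterogeneous edge hardware.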
Utility
25 Jul 2019
12 Nov 2020