Amazon.com, Inc.
Massively parallel real-time database-integrated machine learning inference engine
Last updated:
Abstract:
Techniques for massively parallel real-time database-integrated machine learning (ML) inference are described. An ML model is deployed as one or more model serving units behind an endpoint. The ML model can be associated with a virtual table or function, and a query that references the virtual table or function can be processed by the query execution engine(s) issuing inference requests to the endpoint.
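The mechanism the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration (all class and function names are invented, not from the patent): a query engine maps a registered "virtual function" to a model endpoint and, when a query references that function, evaluates it by issuing batched inference requests.

```python
# Hypothetical sketch of database-integrated ML inference: a query
# execution engine maps a "virtual function" in a query to batched
# inference requests against a model endpoint. All names are illustrative.

from typing import Callable, Iterable, List


class ModelEndpoint:
    """Stands in for one or more model serving units behind an endpoint."""

    def __init__(self, model: Callable[[List[str]], List[str]]):
        self._model = model

    def invoke(self, batch: List[str]) -> List[str]:
        # In a real system this would be a network call to the serving units.
        return self._model(batch)


class QueryEngine:
    """Toy query execution engine supporting registered virtual functions."""

    def __init__(self) -> None:
        self._virtual_fns = {}  # virtual function name -> ModelEndpoint

    def register_virtual_function(self, name: str, endpoint: ModelEndpoint) -> None:
        self._virtual_fns[name] = endpoint

    def execute(self, rows: Iterable[dict], fn_name: str, column: str,
                batch_size: int = 2) -> List[str]:
        """Evaluate fn_name(column) over rows, issuing batched
        inference requests to the registered endpoint."""
        endpoint = self._virtual_fns[fn_name]
        rows = list(rows)
        results: List[str] = []
        for i in range(0, len(rows), batch_size):
            batch = [r[column] for r in rows[i:i + batch_size]]
            results.extend(endpoint.invoke(batch))
        return results


# Usage: register a toy "sentiment" model and run a query over it.
engine = QueryEngine()
engine.register_virtual_function(
    "sentiment",
    ModelEndpoint(lambda xs: ["pos" if "good" in x else "neg" for x in xs]),
)
rows = [{"review": "good product"}, {"review": "bad fit"}, {"review": "good value"}]
print(engine.execute(rows, "sentiment", "review"))  # → ['pos', 'neg', 'pos']
```

The batching loop stands in for the parallelism the title refers to: independent batches could be dispatched concurrently across many model serving units behind the same endpoint.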
Status:
Grant
Type:
Utility
Filing date:
13 Nov 2018
Issue date:
30 Aug 2022