VMware, Inc.
Managed actions using augmented reality

Abstract:

Disclosed are various examples for performing actions using augmented reality. In some examples, a user interface is generated that includes a field of view currently captured by a camera of a client device. Object data in the field of view is detected. Position data is determined, including a location of the client device and an orientation vector that indicates a direction the camera of the client device is facing. A request for actions, which includes the object data and the position data, is transmitted to a management service. An action and an identity of a managed object are received from the management service. The user interface is updated to include the identity of the managed object and a user interface element that, when selected, causes the action to be performed.
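A minimal sketch of the request/response flow the abstract describes, written in Kotlin. All type names, field names, and the stubbed service call are assumptions for illustration and are not taken from the patent itself.

```kotlin
// Hypothetical sketch of the flow: detect objects in the camera's field of view,
// send object data plus position data to a management service, and receive back
// a managed object identity and an action to surface in the AR user interface.
// Names and payload shapes are assumptions, not the patented implementation.

data class Position(
    val latitude: Double,
    val longitude: Double,
    // Unit vector indicating the direction the camera is facing.
    val orientation: List<Double>
)

data class DetectedObject(
    val label: String,          // e.g. an object recognized in the field of view
    val boundingBox: List<Int>  // pixel coordinates within the camera frame
)

data class ActionRequest(
    val objects: List<DetectedObject>,
    val position: Position
)

data class ActionResponse(
    val managedObjectId: String, // identity of the managed object resolved by the service
    val actionName: String       // action that can be performed when the UI element is selected
)

// Stand-in for the call to the management service; a real client would transmit
// the request over the network and deserialize the service's response.
fun requestActions(request: ActionRequest): ActionResponse =
    ActionResponse(managedObjectId = "printer-42", actionName = "restart")

fun main() {
    val request = ActionRequest(
        objects = listOf(DetectedObject("printer", listOf(120, 80, 320, 260))),
        position = Position(47.6062, -122.3321, listOf(0.0, 0.0, 1.0))
    )
    val response = requestActions(request)
    // The AR user interface would overlay the managed object's identity and a
    // selectable element that triggers the returned action.
    println("Overlay: ${response.managedObjectId} -> ${response.actionName}")
}
```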

Status:
Grant
Type:

Utility

Filing date:

7 May 2018

Issue date:

30 Mar 2021