NVIDIA Corporation
SIMULATING VIEWPOINT TRANSFORMATIONS FOR SENSOR INDEPENDENT SCENE UNDERSTANDING IN AUTONOMOUS SYSTEMS

Last updated:

Abstract:

In various examples, sensor data used to train an MLM, and/or used by the MLM during deployment, may be captured by sensors having different perspectives (e.g., fields of view). The sensor data may be transformed, to generate transformed sensor data, such as by altering or removing lens distortions, or by shifting and/or rotating images corresponding to the sensor data to the field of view of a different physical or virtual sensor. As such, the MLM may be trained and/or deployed using sensor data captured from the same or a similar field of view. As a result, the MLM may be trained and/or deployed, across any number of different vehicles with cameras and/or other sensors having different perspectives, using sensor data that shares the perspective of a reference or ideal sensor.
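The rotational part of the viewpoint transform described above can be sketched with a planar homography: for two cameras sharing a center but differing in rotation and intrinsics, pixels map between views via H = K_ref · R · K_src⁻¹. The sketch below is a minimal NumPy illustration under those assumptions; the function names and the nearest-neighbor resampling are illustrative choices, not the patent's actual method (which also covers lens-distortion removal and other transforms).

```python
import numpy as np

def rotation_homography(K_src, K_ref, R):
    """Homography mapping source-camera pixels to a reference camera
    that shares the same center but differs by rotation R and
    intrinsics. Hypothetical helper for illustration only."""
    return K_ref @ R @ np.linalg.inv(K_src)

def warp_to_reference(img, H, out_shape):
    """Inverse-map each reference pixel back into the source image
    (nearest neighbor), so the output looks as if captured from the
    reference sensor's field of view."""
    H_inv = np.linalg.inv(H)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = H_inv @ pts
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    out = np.zeros(out_shape, dtype=img.dtype)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out[ys.ravel()[valid], xs.ravel()[valid]] = img[sy[valid], sx[valid]]
    return out

# Sanity check: identical intrinsics and no rotation leave the image unchanged.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0, 0.0, 1.0]])
img = np.arange(48 * 64, dtype=np.float64).reshape(48, 64)
H = rotation_homography(K, K, np.eye(3))
warped = warp_to_reference(img, H, img.shape)
```

In a deployment like the one the abstract describes, each vehicle's camera would get its own H (from calibration), while the MLM always consumes images in the single reference view.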

Status: Application
Type: Utility

Filing date: 21 Sep 2021

Issue date: 24 Mar 2022