Amazon.com, Inc.
Confidence calibration for natural-language understanding models that provides optimal interpretability

Abstract:

Techniques for creating and calibrating natural-language understanding (NLU) machine learning models are described. In certain embodiments, a training service tunes the parameters of a function that takes the output of an NLU machine learning model as its input, calibrating the model's output to optimize the interpretability of the resulting output, e.g., confidence score(s). Embodiments herein include generating, by the NLU machine learning model, an output based at least in part on an input (e.g., an utterance) from a user, and applying a tuned, output-modifying function to the output from the NLU machine learning model to generate a modified output. An inference may be generated based at least in part on the modified output.
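The abstract does not specify the form of the tuned, output-modifying function. One common choice for such a confidence-calibration function is temperature scaling, sketched below under that assumption; all names, the grid-search tuning loop, and the example data are illustrative, not taken from the patent.

```python
# Hypothetical sketch: temperature scaling as the tuned,
# output-modifying function applied to an NLU model's raw scores.
# The temperature parameter is fit on held-out examples so that the
# resulting confidence scores are better calibrated.
import math

def softmax(logits, temperature=1.0):
    """Turn raw model scores into confidence scores, softened or
    sharpened by a tunable temperature parameter."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def nll(logit_sets, labels, temperature):
    """Negative log-likelihood of held-out examples at a temperature."""
    loss = 0.0
    for logits, label in zip(logit_sets, labels):
        probs = softmax(logits, temperature)
        loss -= math.log(probs[label])
    return loss

def tune_temperature(logit_sets, labels, candidates=None):
    """Grid-search the temperature that minimizes held-out NLL,
    i.e., the tuning step the abstract attributes to a training service."""
    if candidates is None:
        candidates = [t / 10 for t in range(5, 51)]  # 0.5 .. 5.0
    return min(candidates, key=lambda t: nll(logit_sets, labels, t))

# Illustrative held-out data: raw intent scores and true intent indices.
held_out_logits = [[4.0, 0.5, 0.2], [3.5, 3.0, 0.1], [0.2, 4.2, 0.3]]
held_out_labels = [0, 1, 1]
T = tune_temperature(held_out_logits, held_out_labels)
# Modified output: calibrated confidence scores for a new utterance's logits.
calibrated = softmax([4.0, 0.5, 0.2], temperature=T)
```

An inference (e.g., picking the top intent, or deferring to a clarification prompt when the top calibrated confidence is low) would then be drawn from the modified output rather than from the raw scores.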

Status:

Grant
Type:

Utility

Filing date:

14 May 2020

Issue date:

24 May 2022