Microsoft Corporation
ADJUSTING USER EXPERIENCE FOR MULTIUSER SESSIONS BASED ON VOCAL-CHARACTERISTIC MODELS
Abstract:
Techniques for adjusting user experiences for participants of a multiuser session by deploying vocal-characteristic models to analyze audio streams received in association with the participants. The vocal-characteristic models are used to identify emotional state indicators corresponding to certain vocal properties being exhibited by individual participants. Based on the identified emotional state indicators, probability scores are generated indicating a likelihood that individual participants are experiencing a predefined emotional state. For example, a specific participant's voice may be continuously received and analyzed using a vocal-characteristic model designed to detect vocal properties are consistent with a predefined emotional state. Probability scores may be generated based on how strongly the detected vocal properties correlate with the vocal-characteristic model. Responsive to the probability score that results from the vocal-characteristic model exceeding a threshold score, some remedial action may be performed with respect to the specific participant that is experiencing the predefined emotional state.
Utility
27 Feb 2020
2 Sep 2021
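The scoring-and-threshold pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the claimed implementation: the model structure, feature names (`pitch_hz`, `energy_db`), target values, and the 0.7 threshold are all assumptions introduced here for clarity, and the "remedial action" is reduced to flagging the participant.

```python
from dataclasses import dataclass


@dataclass
class VocalCharacteristicModel:
    """Hypothetical vocal-characteristic model for one predefined emotional state.

    Each target is a (feature_name, target_value, tolerance) triple; observed
    vocal properties closer to a target contribute more to the score.
    """
    emotional_state: str
    targets: list

    def probability_score(self, vocal_properties: dict) -> float:
        """Score in [0, 1] indicating how strongly the detected vocal
        properties correlate with this model."""
        contributions = []
        for name, target, tolerance in self.targets:
            observed = vocal_properties.get(name)
            if observed is None:
                contributions.append(0.0)  # missing feature: no evidence
                continue
            # Linear falloff with distance from the target value.
            distance = abs(observed - target) / tolerance
            contributions.append(max(0.0, 1.0 - distance))
        return sum(contributions) / len(contributions)


def maybe_take_remedial_action(model, vocal_properties, threshold=0.7):
    """Flag the participant when the probability score exceeds the threshold.

    A real system would trigger some remedial action (e.g. a moderation
    prompt); here we just report the decision and the score.
    """
    score = model.probability_score(vocal_properties)
    return {"participant_flagged": score > threshold, "score": score}
```

A call such as `maybe_take_remedial_action(model, {"pitch_hz": 255.0, "energy_db": -12.0})` would then return the flag decision alongside the score, letting the session logic decide what remedial step to apply.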