Amazon rolls out developer tools to improve Alexa voice apps


Amazon is adding a trio of new tools to the Alexa Skills Kit, its suite of self-service APIs and resources for conversational app development, designed to improve the quality of experiences built for its Alexa assistant. The first two, which are now generally available — the Natural Language Understanding (NLU) Evaluation Tool and Utterance Conflict Detection — promise to improve overall voice model accuracy, while the Get Metrics API (currently in beta) enables analysis of metrics like unique customers in first- or third-party analytics platforms.

“These tools help complete the suite of Alexa skill testing and analytics tools that aid in creating and validating your voice model prior to publishing your skill, detect possible issues when your skill is live, and help you refine your skill over time,” wrote Amazon product marketing manager Leo Ohannesian. “[We hope these] three new tools [help] to create … optimal customer experience[s].”

The NLU Evaluation Tool runs batches of utterances and compares how a voice app’s natural language processing (NLP) model interprets them against the expected intents. (As Ohannesian notes, overtraining an NLU model with too many sample utterances can actually reduce its accuracy.) Rather than adding those utterances as sample utterances to the interaction model, developers can run evaluations with the commands users are expected to say, isolating new training data by surfacing problematic utterances that resolve to the wrong intent.
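
As a rough, hypothetical illustration of what such an evaluation boils down to — comparing expected utterance-to-intent mappings against what the model actually resolves — here is a minimal Python sketch. The annotation-set format and the resolve_intent helper are illustrative stand-ins, not Amazon’s actual schema or API.

```python
# Minimal sketch of an NLU-evaluation-style check. resolve_intent is a
# hypothetical callable that sends an utterance to the live interaction
# model and returns the intent name it resolved to.
ANNOTATION_SET = [
    {"utterance": "play my workout playlist", "expected_intent": "PlayPlaylistIntent"},
    {"utterance": "skip this song",           "expected_intent": "NextTrackIntent"},
    {"utterance": "turn it up",               "expected_intent": "SetVolumeIntent"},
]

def run_evaluation(annotations, resolve_intent):
    """Compare expected intents against the model's actual resolutions."""
    failures = []
    for case in annotations:
        actual = resolve_intent(case["utterance"])
        if actual != case["expected_intent"]:
            failures.append({**case, "actual_intent": actual})
    passed = len(annotations) - len(failures)
    print(f"{passed}/{len(annotations)} utterances resolved as expected")
    # Problematic utterances become candidate training data.
    return failures
```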

The NLU Evaluation Tool additionally supports regression testing, allowing developers to create and rerun evaluations after adding new features to their voice apps. It can also measure accuracy against anonymized, frequently heard live utterances surfaced from production data, helping to gauge the accuracy impact of any changes made to the voice model.
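
Regression testing then amounts to rerunning the same annotation set after each model change and comparing the pass rate against a stored baseline. A hypothetical, self-contained continuation of the sketch above:

```python
def regression_check(annotations, resolve_intent, baseline_pass_rate):
    """Rerun the evaluation after a model change and flag accuracy regressions.

    Assumes the same hypothetical annotation format and resolve_intent
    helper as the evaluation sketch above.
    """
    correct = sum(
        1 for case in annotations
        if resolve_intent(case["utterance"]) == case["expected_intent"]
    )
    pass_rate = correct / len(annotations)
    if pass_rate < baseline_pass_rate:
        raise AssertionError(
            f"NLU accuracy regressed: {pass_rate:.0%} is below baseline {baseline_pass_rate:.0%}"
        )
    return pass_rate
```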

As for Utterance Conflict Detection, it’s intended to detect utterances that are accidentally mapped to multiple intents, another factor that can reduce NLP model accuracy. It’s automatically run on each model build and can be used prior to publishing the first version of the app or as intents are added over time.
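
Conceptually, the check is straightforward: a sample utterance that appears under more than one intent in the interaction model is a conflict. Below is a minimal sketch of that idea, assuming the standard interaction model JSON layout (interactionModel.languageModel.intents) and ignoring subtleties such as slot values that the real tool may also consider.

```python
import json
from collections import defaultdict

def find_utterance_conflicts(model_path):
    """Report sample utterances that appear under more than one intent."""
    with open(model_path) as f:
        model = json.load(f)

    seen = defaultdict(set)  # normalized utterance -> intents that use it
    for intent in model["interactionModel"]["languageModel"]["intents"]:
        for sample in intent.get("samples", []):
            seen[sample.strip().lower()].add(intent["name"])

    return {utt: sorted(intents) for utt, intents in seen.items() if len(intents) > 1}

# Example (hypothetical file and intent names):
# find_utterance_conflicts("models/en-US.json")
# might return {"open the door": ["OpenDoorIntent", "UnlockIntent"]}
```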

Lastly, there’s the Get Metrics API (beta), which lets Alexa developers more easily analyze metrics such as unique customers in environments like Amazon CloudWatch. It also supports the creation of monitors, alarms, and dashboards that spotlight changes that could impact customer engagement.
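
As a sketch of how a developer might wire this up — the SMAPI endpoint path and query parameters below are assumptions based on the announcement, so the official documentation should be treated as authoritative — a skill’s unique-customer count could be pulled from the Get Metrics API and republished as a custom CloudWatch metric for alarming and dashboards.

```python
import boto3
import requests

SKILL_ID = "amzn1.ask.skill.example"     # hypothetical skill ID
ACCESS_TOKEN = "<LWA access token>"      # Login with Amazon token used for SMAPI calls

def fetch_unique_customers(start, end):
    """Pull a metric from the Get Metrics API (endpoint and params are assumptions)."""
    resp = requests.get(
        f"https://api.amazonalexa.com/v1/skills/{SKILL_ID}/metrics",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={
            "startTime": start, "endTime": end, "period": "P1D",
            "metric": "uniqueCustomers", "stage": "live", "skillType": "custom",
        },
    )
    resp.raise_for_status()
    return resp.json()

def publish_to_cloudwatch(value, timestamp):
    """Republish the value as a custom CloudWatch metric so alarms and dashboards can use it."""
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="AlexaSkill/Engagement",
        MetricData=[{
            "MetricName": "UniqueCustomers",
            "Timestamp": timestamp,
            "Value": value,
            "Unit": "Count",
        }],
    )
```

A standard CloudWatch alarm on the custom metric would then cover the monitoring piece Amazon describes.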

Amazon says the Get Metrics API is available in all locales and currently supports the Custom skill model, the pre-built Flash Briefing model, and the Smart Home Skill API.


