New features and improvements to the Speechly SLU API and client libraries
On this page, we publish improvements and fixes to Speechly products bi-weekly.
Ready-made microphone and transcript UI components for faster development on iOS.
An Android client for easy integration with Speechly is now available.
The Android client enables quick and efficient development of native Android applications that use Speechly.
Support for natural time expressions such as “fifteen thirty”, “20 past nine” or “5 minutes after midnight” with the $SPEECHLY.TIME standard variable and the Time entity type.
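As a rough illustration of consuming the new entity type on the client side, the sketch below uses the @speechly/browser-client package. The app ID placeholder and the lowercase entity type string "time" are assumptions that depend on your own configuration, not part of this release.

```typescript
import { Client, Segment } from "@speechly/browser-client";

// The app ID is a placeholder; replace it with the ID of your Speechly app.
const client = new Client({ appId: "<your-app-id>" });

client.onSegmentChange((segment: Segment) => {
  // Look for an entity produced by the Time entity type. The exact type
  // string ("time" here) is an assumption and follows your configuration.
  const time = segment.entities.find((e) => e.type === "time");
  if (time?.isFinal) {
    console.log(`Recognized time expression: ${time.value}`);
  }
});

// Connect to the API and start streaming microphone audio,
// for example when the user presses a mic button.
async function startListening(): Promise<void> {
  await client.initialize();
  await client.startContext();
}
```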
A new debugging feature in the Speechly CLI tool displays example utterances for a given configuration and calculates statistics about occurrences of intents and entities.
Typically, Speechly SLU models are adapted to a specific use case, which helps improve speech recognition accuracy. Now you can also use unadapted ASR for pure transcription use cases. You can test the speech recognition performance here.
The Speechly Annotation Language natively supports phone numbers, email addresses, person names and website addresses. This enables developers to easily build voice experiences that use these data types, for example “Add contact with name Jack Johnston and email address jack dot johnston at gmail dot com”.
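The snippet below is a minimal sketch of how an application might turn such an utterance into a contact object, again using the browser client types for illustration. The intent name "add_contact" and the entity type strings "name" and "email" are assumptions about the app configuration. A function like this could be registered with the client's onSegmentChange callback, as in the previous sketch.

```typescript
import { Segment } from "@speechly/browser-client";

interface Contact {
  name?: string;
  email?: string;
}

// Collect the assumed "name" and "email" entities of a finalized segment
// into a plain contact object. The intent and entity names here are
// assumptions that follow your own configuration.
function contactFromSegment(segment: Segment): Contact | undefined {
  if (!segment.isFinal || segment.intent.intent !== "add_contact") {
    return undefined;
  }
  const contact: Contact = {};
  for (const entity of segment.entities) {
    if (entity.type === "name") contact.name = entity.value;
    if (entity.type === "email") contact.email = entity.value;
  }
  return contact;
}
```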
Audio streaming and API result handling now use multiple threads for improved performance. The main UI thread is never blocked by Speechly, resulting in a more responsive UI.
A new baseline model for speech recognition improves ASR accuracy in all use cases.
Utterances that are out of domain (i.e., not covered by the example utterances provided in the model configuration) now have better speech recognition results.