Product Updates

New features and improvements to Speechly spoken language understanding API and client libraries

May 4, 2022

April 14, 2022

  • Unity Client Library: With the Speechly Client Library for Unity, developers can now easily add voice UIs to their experiences built with Unity. Read our blog post or check out the docs to learn more.
  • Command Line Tool: Version 0.8.2 released.

March 20, 2022

  • Dashboard: A bunch of small UI tweaks in Project Settings and Configure views to enhance the usability of the Dashboard.

February 28, 2022

  • Command Line Tool: Version 0.8.0 released, which adds an utterances command that prints a sample of recent utterances for an application.

February 17, 2022

  • Dashboard: Added advanced settings such as silence-triggered segmentation, rule-based NLU and others to tweak your application even further. Find them by going to Application > Settings.
  • Fixed: Several under-the-hood improvements to make your developing experience faster and smoother.

February 11, 2022

  • Dashboard: Intents and entities now have a unified look throughout the Dashboard. Blue for entity and green for intent. Happy developing!

January 18, 2022

  • Dashboard: Use whatever case type you like for Entities and Intents: camelCase, snake_case and UPPERCASE are now fully supported.
  • Client Libraries: The @speechly/browser-client@v1.1.0 library now contains both ES2015 and UMD module bundles that work with the HTML <script> tag. Use it if you need lower-level access to the Speechly API than Speechly’s Web Components provide. The ES2015 module bundle is now also used for Node, replacing the CommonJS modules.
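
As an illustration of the case-type support, here is a quick sketch (generic JavaScript, not part of any Speechly library; the regexes are our own approximation of the three styles) that classifies a name into one of the supported conventions:

```javascript
// Illustrative patterns for the three naming styles now accepted for
// intents and entities. These are an approximation, not Speechly's
// actual validator.
const styles = {
  camelCase: /^[a-z]+(?:[A-Z][a-z0-9]*)*$/,
  snake_case: /^[a-z]+(?:_[a-z0-9]+)*$/,
  UPPERCASE: /^[A-Z][A-Z0-9_]*$/,
};

function detectStyle(name) {
  for (const [style, re] of Object.entries(styles)) {
    if (re.test(name)) return style;
  }
  return null;
}

console.log(detectStyle('pickupLocation'));  // camelCase
console.log(detectStyle('pickup_location')); // snake_case
console.log(detectStyle('PICKUP_LOCATION')); // UPPERCASE
```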

January 13, 2022

  • Command Line Tool: Improved our CLI tool to work more intuitively, while remaining backwards compatible.
  • Dashboard: See when the application was deployed straight from the Dashboard. Hover the date to see an exact timestamp.
  • ASR improvements: Use SAL syntax in lookup entities to create more powerful lookups with less typing.

January 5, 2022

  • Dashboard: We’ve redesigned the dashboard to match the new Speechly identity. Head over to speechly.com to see it in its full glory.

December 27, 2021

  • Dashboard: Integrating your Speechly configuration just got easier. Open your application and head over to the new “Integrate” tab for instructions on how to start developing locally - or in CodePen if that’s your fancy. While at it, we also did some visual polishing.

December 22, 2021

  • Dashboard: Edit and create lookups straight in the Dashboard using the new CSV editor. Enjoy a more efficient navigation experience with a new navigation bar design.

December 9, 2021

  • Dashboard: View and edit CLI-deployed applications, import and convert an Alexa Interaction Model to a Speechly configuration, and enjoy a redesigned application-level navigation.
  • Command Line Tool: Import and convert an Alexa Interaction Model to a Speechly configuration.

November 29, 2021

  • Dashboard: New sign-up flow and new application configuration view.
  • Speechly API: New data centers in the US (east and west coasts), UK and Singapore. This reduces transcript latency in client applications.
  • Client Libraries and Demos: All of our open-source Client Libraries, example applications, and demos are now in the same GitHub repository. Going forward, our developer community engagement will be focused on this repository.

November 1, 2021

  • React Voice Form Components: A new UI library with multi-modal browser widgets that can be controlled with speech, tap, pointer, and keyboard. Available with npm. Documentation here.
  • Speechly Web UI Components: Latest version adds support for “Tap-to-Talk” feature. A short tap on the microphone button will start recording, and the connection is closed automatically when the user stops talking.
  • Command Line Tools: Version 0.5.4 introduces functionality for evaluating a deployed Speechly configuration using a list of test utterances (text only). Documentation here.
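
Conceptually, evaluating a deployed configuration boils down to comparing expected annotations against what the deployed model returns for each test utterance. A simplified sketch of an exact-match accuracy score (hypothetical data and a simplified annotation format; not the CLI's actual implementation):

```javascript
// Simplified sketch of scoring test utterances against results returned
// by a deployed configuration. The annotation strings below are a
// simplified stand-in; the CLI's real evaluation may differ.
function exactMatchAccuracy(expected, predicted) {
  let hits = 0;
  for (let i = 0; i < expected.length; i++) {
    if (expected[i] === predicted[i]) hits++;
  }
  return hits / expected.length;
}

const expected = [
  '*book book a [table](item) for [two](amount)',
  '*order order a [pizza](item)',
];
const predicted = [
  '*book book a [table](item) for [two](amount)',
  '*order order a [burger](item)',
];

console.log(exactMatchAccuracy(expected, predicted)); // 0.5
```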

October 8, 2021

September 6, 2021

  • Speechly API: Improved handling of initial latency when streaming audio to a new app_id, improved handling of client connections.
  • SLU Engine: New Standard Variable and Entity Type for handling (US style) street addresses.
  • Command Line Tool: Version 0.4.2 released.
  • Browser client: Version 1.0.17 released (with automatic gain control of audio).

June 18, 2021

  • Speechly API: A new and improved load balancer.
  • Browser client: Version 1.0.15, better handling of audio capture and websockets.

May 25, 2021

  • ASR improvements: New baseline ASR model.
  • Dashboard: The dashboard now has a button (“SHOW SAMPLE”) that displays a set of random example utterances generated from the given SAL configuration.
  • Web UI components: Major update with a unified API for both JS and React, a new TranscriptDrawer component with just-in-time usage hints for an improved user onboarding experience, a listening prompt to indicate when the app is listening, and a developer-triggerable command to acknowledge that an utterance was received.

May 11, 2021

  • Browser client: v1.0.13, stability improvements.
  • SLU engine: Made entity and intent detectors more robust against inadvertently captured speech that is not directed at the device.
  • Command Line Tool: Version 0.4.1, show amount of annotated audio for each app_id.

April 23, 2021

  • Command Line Tool: Version 0.4, with support for displaying utterance statistics for each app_id.
  • Documentation: Major update to developer documentation.
  • Fixed: Several fixes to handling entities and segments.

April 9, 2021

  • iOS client: Version 0.3 released, includes updated UI components and other improvements.
  • Project-based login: Support for logging in using a project_id. This allows an application to easily switch between app_ids.

March 19, 2021

  • SLU engine: Update to the entity detection model with increased accuracy.
  • Documentation: More examples of gRPC API usage.
  • Browser client: More robust audio recording.
  • ASR improvements: New baseline ASR model.

March 5, 2021

  • Training time estimates: Changed how we estimate deployment times when training the models.
  • Playground: Pressing ‘alt’ temporarily enables the Try button so that one can use the Playground with an old model while a new one is being trained.
  • Model deployment: More robust training infrastructure that reduces model training times in certain situations.

February 22, 2021

  • iOS UI Components: Ready-made UI components for the microphone and transcript to speed up development on iOS.
  • Android Client: An Android client for easy integration with Speechly is now available, enabling quick and efficient development of Speechly-powered native Android applications.
  • Support for new entities: Support for natural time expressions such as “fifteen thirty”, “20 past nine” or “5 minutes after midnight” with the $SPEECHLY.TIME standard variable and the Time entity type.
  • Fixed: Better support for long utterances, plus other fixes.
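
To illustrate what the Time entity type does, here is a toy normalizer for the three quoted expressions (purely illustrative JavaScript; this is not how Speechly's model works, and it handles only these exact phrasings):

```javascript
// Toy normalizer showing the kind of mapping the Time entity type
// performs: natural time expressions -> a single time value.
// Illustrative only; not Speechly's implementation.
const numbers = { five: 5, nine: 9, fifteen: 15, thirty: 30, twenty: 20 };

function toNumber(word) {
  return numbers[word] ?? parseInt(word, 10);
}

function parseTime(text) {
  let m;
  // "20 past nine", "5 minutes after midnight"
  if ((m = text.match(/^(\w+) (?:minutes )?(?:past|after) (\w+)$/))) {
    const hour = m[2] === 'midnight' ? 0 : toNumber(m[2]);
    return `${hour}:${String(toNumber(m[1])).padStart(2, '0')}`;
  }
  // "fifteen thirty"
  if ((m = text.match(/^(\w+) (\w+)$/))) {
    return `${toNumber(m[1])}:${String(toNumber(m[2])).padStart(2, '0')}`;
  }
  return null;
}

console.log(parseTime('fifteen thirty'));           // 15:30
console.log(parseTime('20 past nine'));             // 9:20
console.log(parseTime('5 minutes after midnight')); // 0:05
```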

February 8, 2021

  • Debugging models in Command Line Tool: New debugging feature in Command Line Tool displays example utterances for a given configuration and calculates statistics about occurrences of intents and entities.
  • Support for unadapted ASR: Typically, Speechly SLU models are adapted for a specific use case, which helps improve speech recognition accuracy. Now you can also use unadapted ASR for pure transcription use cases. You can test the speech recognition performance here.
  • Support for new entities: Speechly Annotation Language now natively supports phone numbers, emails, person names and website addresses. This enables developers to easily build voice experiences that contain these data types, for example “Add contact with name Jack Johnston and email address jack dot johnston at gmail dot com”.
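
The statistics the debugging feature reports are essentially occurrence counts over example utterances. A sketch of such counting (the bracketed annotation format below is a simplified stand-in, not exact SAL syntax):

```javascript
// Sketch of the kind of statistics the Command Line Tool's debugging
// feature reports: occurrence counts of intents and entities across
// example utterances. The annotation format is a simplified stand-in.
function countOccurrences(utterances) {
  const intents = {};
  const entities = {};
  for (const u of utterances) {
    // Leading "*name" marks the intent in this simplified format.
    const intent = u.match(/^\*(\w+)/);
    if (intent) intents[intent[1]] = (intents[intent[1]] ?? 0) + 1;
    // "[value](type)" marks an entity of the given type.
    for (const e of u.matchAll(/\]\((\w+)\)/g)) {
      entities[e[1]] = (entities[e[1]] ?? 0) + 1;
    }
  }
  return { intents, entities };
}

const sample = [
  '*add_contact add [Jack](name) with email [jack at example](email)',
  '*add_contact add [Jill](name)',
];
console.log(countOccurrences(sample));
// { intents: { add_contact: 2 }, entities: { name: 2, email: 1 } }
```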

January 25, 2021

  • Improved audio handling in browser clients: Streaming audio and handling API results are now done on multiple threads for improved performance. The main UI thread is never blocked by Speechly, resulting in a more responsive UI.

January 11, 2021

  • ASR Improvements: New baseline model for speech recognition improves ASR accuracy in all use cases.
  • Improvements to ASR adaptation: Utterances that are out of domain (i.e., for which no example utterances are provided in the model configuration) now get better speech recognition results.


Last updated by Mathias Lindholm on February 3, 2022 at 13:47 +0200

Found an error in our documentation? Please file an issue or make a pull request.