The Speechly Quick Start helps you get started developing with the Speechly Dashboard.
Speechly is a tool for building real-time multimodal voice user interfaces for touch screens and web applications.
Speechly improves your application’s user experience by adding a natural and intuitive way of interacting with it.
This tutorial walks you through building your first Speechly voice application and testing it out in Speechly Playground.
You can read more on the Speechly website.
Are you a developer? Jump straight into a tutorial!
Video Quick Start
This Quick Start will guide you through the basics of building Spoken Language Understanding (SLU) models with the Speechly Dashboard. It covers the following steps:

- Creating a Speechly account
- Creating and deploying your first application
- Testing the application in the Speechly Playground

The best way to start developing with Speechly is to complete this Quick Start.
First, navigate to the Speechly Dashboard to create an account and accept the terms and conditions.
After creating a user account, you will land on the Speechly Dashboard main page, where you manage your applications.
Create a new application by clicking the blue Create application button.
Name your application, select English as the language, and choose Home Automation as the template.
A Speechly application is configured by providing it with a set of annotated example utterances. Your Home Automation application contains a ready-made configuration that you can deploy by clicking the blue Deploy button in the bottom right corner of the screen.
Deployment should take 1–2 minutes.
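To give a feel for what the configuration in the template contains, annotated example utterances in SAL (Speechly Annotation Language) look roughly like the sketch below. The intent names and entity names here are illustrative; the actual Home Automation template may use different ones. Each line starts with `*intent`, and entity values are marked with `[value](entity_name)`:

```
*turn_on turn on the [lights](device) in the [living room](room)
*turn_off switch off the [lights](device) in the [kitchen](room)
*turn_off turn off the [radio](device)
```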
Once the application has been deployed, the Try button next to Deploy becomes active, and the status bar shows a green dot reading “Deployed”. Now it’s time to test your application, so click Try to go to the Playground.
Click on Tap to start at the bottom of the page, and give your browser permission to use the microphone. A microphone button appears.
Now you can start sending audio to the Speechly API. Click and hold either the microphone button or the space bar and say, “Switch off the lights in the kitchen.” You’ll see the transcript of what you said along with the intent (turn_off) and the entities (lights and kitchen).
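In an application, the result of such an utterance can be thought of as a transcript, an intent, and a list of entities. The sketch below illustrates this shape in TypeScript; the interface and field names are illustrative assumptions, not the exact Speechly client API:

```typescript
// Hypothetical shape of an SLU result for the utterance
// "Switch off the lights in the kitchen" -- field names are
// illustrative, not the exact Speechly client API.
interface Entity {
  type: string;  // entity name, e.g. "device" or "room"
  value: string; // recognized value from the transcript
}

interface SluResult {
  transcript: string;
  intent: string;
  entities: Entity[];
}

const result: SluResult = {
  transcript: "switch off the lights in the kitchen",
  intent: "turn_off",
  entities: [
    { type: "device", value: "lights" },
    { type: "room", value: "kitchen" },
  ],
};

// An application would branch on the intent and read the entities:
if (result.intent === "turn_off") {
  const room = result.entities.find((e) => e.type === "room")?.value;
  console.log(`Turning off the lights in: ${room}`);
}
```

Structuring results this way keeps the voice layer decoupled from application logic: the handler only sees intents and entities, not raw audio.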
You’re done! Now you can continue trying out different utterances. Alternatively, go back to the configuration screen and edit your example phrases to teach your model to understand a greater variety of commands.
Next, you can try adding a new intent to the configuration. A useful function to add to a Home Automation application could be that of adjusting the brightness of the lights. Add a new intent, say, set_brightness, which can change the light brightness to a value ranging from 1 to 100 in different rooms. You can learn more about the SAL syntax here.
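A sketch of what such an addition might look like in SAL is shown below. The example phrases and entity names are assumptions for illustration; consult the SAL documentation for the exact syntax for numeric entities:

```
*set_brightness set the brightness to [fifty](brightness) in the [kitchen](room)
*set_brightness dim the [bedroom](room) lights to [20](brightness) percent
```

After adding new example utterances, redeploy the application and try the new phrases in the Playground.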
Learn how you can create a simple React application with a real-time multimodal voice user interface.
Read more »
Last updated by ottomatias on November 24, 2020 at 15:41 +0200
Found an error in our documentation? Please file an issue or make a pull request.