An application that updates its own user interface based on the user's voice commands, using speech recognition and machine learning.
Ever wonder what it's like to have Jarvis from Iron Man? With the advances in machine learning and speech recognition, we can now build web applications with something like Jarvis. This is a simple proof of concept demonstrating how users can build web UIs with simple voice commands.
This application uses RecorderJS to record audio and the Bing Speech API to transcribe the user's voice commands, then uses LUIS (Language Understanding Intelligent Service) to extract the user's intent, which is interpreted and used to update cells in the web user interface.
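To give a feel for the interpretation step, here is a minimal sketch of turning a LUIS response into a UI update. The JSON field names (`topScoringIntent`, `entities`) follow the LUIS v2 response shape, but the intent and entity names used here (`UpdateCell`, `CellColor`) are illustrative assumptions, not taken from this app's actual LUIS model:

```javascript
// Hypothetical sketch: extract the top intent and entities from a LUIS
// v2-style response so they can drive a cell update in the UI.
function interpretLuisResponse(luisJson) {
  // topScoringIntent holds the single best intent match
  const intent = luisJson.topScoringIntent ? luisJson.topScoringIntent.intent : null;

  // Collect entities into a simple { type: value } map
  const entities = {};
  (luisJson.entities || []).forEach(function (e) {
    entities[e.type] = e.entity;
  });

  return { intent: intent, entities: entities };
}

// Example: a recognized command like "make the first cell red"
const sample = {
  query: 'make the first cell red',
  topScoringIntent: { intent: 'UpdateCell', score: 0.92 },
  entities: [{ entity: 'red', type: 'CellColor' }]
};

const result = interpretLuisResponse(sample);
console.log(result.intent);             // → UpdateCell
console.log(result.entities.CellColor); // → red
```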
Clone this repo and then install dependencies:
git clone https://github.com/ritazh/sttdemo.git
cd sttdemo
npm i
Run the application:
node app.js
Then open http://localhost:3000 in your browser.
Set up your own keys for Bing Speech and LUIS.
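One common way to supply these keys is through environment variables read at startup. This is only a sketch of that approach; the variable names below are illustrative assumptions, not the ones this project's code actually reads:

```javascript
// Hypothetical config sketch: load service keys from environment variables.
// BING_SPEECH_KEY, LUIS_APP_ID, and LUIS_SUBSCRIPTION_KEY are placeholder
// names, not taken from this repo.
const config = {
  bingSpeechKey: process.env.BING_SPEECH_KEY,
  luisAppId: process.env.LUIS_APP_ID,
  luisKey: process.env.LUIS_SUBSCRIPTION_KEY
};

module.exports = config;
```

Keeping keys out of source control (for example, by exporting them in your shell before running `node app.js`) avoids accidentally committing credentials.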
Many thanks to @rickbarraza for designing and developing the user interface for this application.
Many thanks to @cwilso for developing and maintaining AudioRecorder, which is used in this app.
Licensed under the MIT License (MIT). Copyright © Microsoft Corporation. For more information, see LICENSE.