Open Sourcing Violet — A Voice Application Framework

Vineet Sinha
Jan 29 - 5 min read

Voice-enabled devices are quickly transforming the way we interact with technology — empowering us to be more productive in our daily lives. For instance, managing your to-do list, setting timers, or even discovering restaurants in your neighborhood can now be done with simple voice commands to Siri, Amazon Alexa or Google Home.

These capabilities are delivered as skills, built by a large developer community. The challenge, however, is that most skills are command-response driven, in contrast to the human-like conversations people actually need. For example:

  1. Consider something as simple as marking ‘bought milk’ on your task list. Doing this well might mean a voice app noticing that your grocery list has an entry for ‘milk and creamer,’ reminding you to buy it, and then checking with you that it’s done before marking it complete.
  2. When looking for food, we often want something like ‘the most highly rated, moderately expensive Korean restaurants within a 5-mile drive.’ The challenge here is that it is hard to remember such a large set of constraints when asking the system, and that the constraints are often context dependent: is it a date night, or do you just want something quick? Being able to navigate such constraints means having a conversation with a voice application.
  3. Even e-commerce, the hallmark of the web, has complications that need to be addressed for voice. Buying something popular might seem like an easy voice request, but it still needs additional parameters from the shopper, such as the address the items should be shipped to, how fast they should arrive, and which credit card to use for the purchase.

To help with such needs, today we are excited to extend the power of voice to the enterprise by open sourcing Violet, a voice application framework that provides support for building conversational bots and apps on Amazon Alexa. Violet provides a conversational engine that runs as an Alexa Skill, abstracting away implementation complexity so that developers can focus on business logic and conversational flow.

With Violet, sophisticated conversations are supported using goals. For example, when a shopper indicates an interest in buying a rake, a script can simply add rake-finding and purchasing as goals to be met. The script can then ask for the information needed to finalize the purchase and meet the goal: which brand, what size, what quantity, and which billing information to use.
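
As a rough sketch of what this can look like in a script (the goal names, and the addGoal, defineGoal, and response.get calls below are illustrative assumptions based on the respondTo and defineGoal calls described later, not verbatim Violet documentation):

// when the shopper asks to buy a rake, queue up what still needs to happen
violet.respondTo({
  expecting: 'I would like to buy a rake',
  resolve: function(response) {
    response.addGoal('chooseRake');      // brand, size, and quantity
    response.addGoal('collectBilling');  // shipping and billing details
  }});

// a goal keeps prompting the user until the script has what it needs to resolve it
violet.defineGoal({
  goal: 'chooseRake',
  prompt: 'Which brand and size would you like, and how many?',
  respondTo: [{
    expecting: 'I want [[quantity]] [[brand]] rakes',
    resolve: function(response) {
      var qty = response.get('quantity');
      var brand = response.get('brand');
      response.say(`Adding ${qty} ${brand} rakes to your cart`);
    }}]
});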

Beyond goals, Violet keeps basic voice scripts down to minimal code and abstracts back-end integration complexity away into plug-ins. Finally, scripts can be tested locally using a web interface and then deployed to a web server for easy registration with Amazon’s Alexa API.

A Walkthrough

We have a collection of samples and you should be able to deploy them straight from GitHub by clicking on the “Deploy to Heroku” button.

Alternatively, you can build a voice script locally:

a) Install the dependencies via npm

npm install violet --save

b) Try a simple intent:

var violet = require('violet/lib/violet').script();

violet.respondTo({
  expecting: "Whats next on my todo",        // the utterance that triggers this intent
  resolve: function(response) {
    // todoSvc stands in for your own back-end to-do service
    var nextItem = todoSvc.getNextItem();
    response.say(`Next item on your list is ${nextItem}`);
  }});

Once you have the service running, try it out in the web interface.

When you are ready, click the “Registration” button and follow the instructions.

You can now try out the Skill with your Amazon Echo device or use EchoSim.io to try it on your laptop.

How it Works

Work on Violet started early last year, when I had the opportunity to work with the Salesforce Immersion team on how we could leverage voice capabilities to build great user experiences. We quickly got very excited about the potential that voice-enabled devices present, but that potential also comes with a host of challenges. That’s why today we are excited to share Violet with the open source community, in the hope that it will help others utilize the power of voice.

Violet is built in Node.js and takes advantage of the Alexa app framework. Each voice script is primarily a collection of intents (declared with respondTo calls) and goals (declared with defineGoal calls). The Violet core engine uses this declared information to extract input parameters and to build a map of what can happen when, and what code to call. It then registers this map with the underlying Alexa API.
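
For instance, an intent that takes an input parameter might look roughly like the following (the [[item]] slot syntax and the response.get call are assumptions meant only to illustrate parameter extraction, and todoSvc is again a hypothetical back-end service):

violet.respondTo({
  expecting: 'I bought [[item]]',        // [[item]] becomes an input parameter
  resolve: function(response) {
    var item = response.get('item');     // Violet extracts the slot value for us
    todoSvc.markDone(item);              // hypothetical back-end call
    response.say(`Marking ${item} as done on your list`);
  }});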

When a user speaks to a voice-enabled device, the device is first triggered by the selected wake word, and the audio is then sent to an Amazon server, where it is converted from speech to text and resolved to an intent. From there, the Violet engine is called and the right entry in the registered map is selected.

When Amazon calls the Violet core engine with an intent, Violet loads any relevant plug-ins, checks against the goals, and calls the right piece of code in the voice script. After the voice script runs, Violet checks whether there are any other goals to execute, and finally communicates a response back to Amazon’s Alexa API in the form of an SSML string.
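
That SSML is just marked-up text; the to-do response above, for example, would boil down to something roughly like the snippet below (the pause is only there to show that SSML lets a script control pacing and emphasis):

<speak>
  Next item on your list is <break time="200ms"/> milk and creamer
</speak>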

Let us know your thoughts

We want to hear from you! If the pre-built voice scripts do not meet your needs, we hope they are simple enough for you to modify or use as a guideline for building your own. If you want to do something more complicated, have a look at a plug-in or dive into the Violet code. Either way, we would love to hear your thoughts.

If you have questions while digging into this, feel free to find us on Gitter.

Interested in Joining the Pilot?

Are you building a serious voice application in the next few months and interested in using Violet? We are working with a select set of teams to make sure that the first few projects built on Violet go smoothly. If you are interested, fill out this form.

Acknowledgements

Violet would not have been possible without help from a number of people. In particular, thanks to the in-depth feedback from: Ashita Saluja, Gina Nguyen, John Brunswick, and Nilan Fernando.


Learn more about Alexa Skills in Salesforce Trailhead.
