Voice Enabling Your Flutter App — A Complete Guide to Adding a Voice Assistant to Your Existing Flutter Application in an Hour with the Alan Platform

Alan AI · Jun 27, 2020 · 6 min read

Alan is a complete Conversational Voice AI Platform that lets you Build, Debug, Integrate, and Iterate on a voice assistant for your application.

Previously, you would have had to build everything from the ground up: learning Python, creating your own machine learning model, hosting it in the cloud, training speech recognition software, and tediously integrating it all into your app.

Alan Platform Diagram

The Alan Platform automates this with its cloud-based infrastructure — incorporating advanced voice recognition and Spoken Language Understanding technologies. This enables Alan to support complete conversational voice experiences — defined by developers in Alan Studio scripts, written in JavaScript. Alan integrates voice AI into any application with easy-to-use SDKs.

To show you the power of the Alan Platform, we’ll start by building a simple voice script to define the experience; then, we’ll add it to a Flutter application. The Flutter application we’ll be using here is the Shrine app available on the Alan GitHub page.

To start, clone the Alan Flutter SDK repository:

git clone https://github.com/alan-ai/alan-sdk-flutter.git

The example application is located in the examples/ directory, in the ShrineApp/ folder.
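
Based on that layout, you can change into the example app’s directory right after cloning (the exact path assumes the repository structure described above):

cd alan-sdk-flutter/examples/ShrineApp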

Now that we have our app saved on our computer, we can start with the voice script. Remember where this is saved — we’ll need to come back for it later!

Building Your Alan Application

First, sign up and create an Alan Studio account.

Next, log in to Alan and you’ll see the project dashboard. Here, we’ll create a sample project for the open source Shrine App for Flutter (which we downloaded before).

In the project, click the ‘Add Script’ button and select the Flutter_Shrine_Data, the Flutter_Shrine_Logic, and Flutter_Shrine_Questions scripts.

Make sure that the Flutter_Shrine_Data script is listed first.

To understand Alan voice scripts, we need to know two essential concepts — intents and entities.

  1. Intents — the phrases we want the assistant to recognize, such as “What can I do?” or “What is available?”
  2. Entities — the keywords within those intents. Product names or category aliases, for example, are the specific words the app needs to recognize and act on.

In these scripts, Alan supports advanced language definition tools that can be used to make intents and entities of any complexity. Entities like lists loaded from databases or fuzzy entities are critical in many different use cases and can be handled by Alan’s advanced dialog management system.
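
For illustration, here is a minimal sketch of what an intent and a user-defined entity (slot) can look like in an Alan Studio script. The phrases, category names, and responses below are made up for this example — they are not taken from the actual Flutter_Shrine scripts:

```javascript
// A plain intent: match a phrase and play a spoken response.
intent('What is available?', p => {
    p.play('We have accessories, clothing, and home items.');
});

// An intent with a user-defined slot (entity): the matched value
// is available on the handler's parameter.
intent('Show me the $(CATEGORY accessories|clothing|home) section', p => {
    p.play(`Opening the ${p.CATEGORY.value} section.`);
});
```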

Alan also supports UI navigation with voice commands — enabling users to navigate through the different screens in an application and creating a seamless user experience.
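
Navigation is typically implemented by sending a command object from the voice script to the app, which then performs the actual screen change. A minimal sketch, with a hypothetical command name and route chosen for illustration:

```javascript
intent('Open the cart', p => {
    p.play('Opening your cart.');
    // Send a structured command to the client app; the Flutter side
    // listens for it and performs the actual navigation.
    p.play({command: 'navigation', route: 'cart'});
});
```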

Now that we understand the basic parts of our script, we can move to debugging our voice experience in the application.

Debugging

Alan provides a host of features that make debugging easy and efficient.

First, we can test out this script in the Debug Chat by pressing the Alan button and asking “What is available?”

Here, we can see that Alan replies to the user and sends a corresponding visual update after recognizing the “What is available?” intent.

Many applications have complex workflows and could have dozens or hundreds of intents. While debugging, Alan lets you see which intents are available in the current context and what has occurred in the current dialog flow — showing the intent that was used. That makes your script easy to debug even with the most complex intents and user flows.

Finally, Alan provides a dedicated platform where we can test our application — Alan Playground. Available on Web, iOS, and Android, Alan Playground is another option to test your application alongside its visual contexts.

To debug on mobile, start by clicking the QR code button in the Alan Studio Debug Chat, then scan the code with Alan Playground on your phone. This opens your voice script in the Playground app, where you can test it.

To test on Web, click the Alan Playground icon (Play Button) in the top right corner, and you can test your script on the next screen.

Once we’re done testing, we can create a new version of the Shrine App for Production!

Versioning

Alan supports versioning for development, testing, and production — helping you easily manage the process of adding the voice experience to your application. Publishing a new version is automated in Alan’s backend and will automatically link to all production devices, without requiring any manual deployment.

Our script is currently saved in Development as the Last version (the only editable version). After debugging, we’ll save our voice script and promote it to Production. Let’s name this version “V1” and select “Run on Production”.

To get our production key, navigate to the production section and select the “Embed Code” button.

At the top, we see our Alan SDK Key, which we’ll save to integrate our script into the application. Now that our script is fully set up and tested, we can integrate it into the Shrine application.

Integration

Remember the GitHub repository we cloned?

Open the ShrineApp project from the examples/ folder within that GitHub repository.

In the project root, open the pubspec.yaml file and add the alan_voice dependency:
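
A sketch of the dependency entry — the version constraint below is an example only, so check pub.dev for the latest release of the alan_voice package:

```yaml
dependencies:
  flutter:
    sdk: flutter
  # Alan voice SDK for Flutter (example constraint; use the latest from pub.dev)
  alan_voice: ^4.0.0
```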

Then, in the lib folder, open the app.dart file and import the alan_voice package:

Within app.dart, modify the initState() function to initialize the Alan button with your Alan SDK key:
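
A minimal sketch of what this change can look like, assuming the alan_voice package’s addButton and onCommand APIs. The widget and state class names here are illustrative stand-ins for the Shrine app’s actual root widget, and the key is a placeholder for the SDK key copied from the Embed Code section:

```dart
import 'package:flutter/material.dart';
import 'package:alan_voice/alan_voice.dart';

// Illustrative stand-in for the Shrine app's root widget.
class ShrineApp extends StatefulWidget {
  @override
  _ShrineAppState createState() => _ShrineAppState();
}

class _ShrineAppState extends State<ShrineApp> {
  @override
  void initState() {
    super.initState();

    // Attach the Alan voice button, using the SDK key copied from
    // the "Embed Code" section in Alan Studio.
    AlanVoice.addButton('YOUR_ALAN_SDK_KEY');

    // Optionally react to commands sent from the voice script,
    // e.g. to drive navigation or visual updates.
    AlanVoice.onCommand.add((command) {
      debugPrint('Received command: ${command.data}');
    });
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(body: Center(child: Text('Shrine'))),
    );
  }
}
```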

Open the ios folder with Xcode and confirm that the Display Name is “Runner” and that your Flutter and Xcode versions are up to date:

Version of Flutter: 1.17.3 or later

Version of Xcode: 11.5 or later

After adding this code, you can run the Shrine app in an emulator or on an iOS device and start using Alan!
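
From the ShrineApp/ directory, fetch the dependencies and launch the app on a connected device or simulator with the standard Flutter commands:

flutter pub get

flutter run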

Conclusion

Only the Alan Platform gives you the ability to create a voice assistant that enhances your application’s existing user experience and to make continuous improvements that precisely align with what your users want.

With its simplified format, Alan is accessible to any developer, and does the heavy lifting of creating an accurate language model and managing the dialogues so that you can Build, Debug, and Integrate a voice assistant into your app in just a few days.

Building with Alan is simple — the voice scripts are intuitive, scalable, and powerful. After developing your voice script, you can debug it and take full control of your development-deployment stack. Then, you can integrate Alan into your application without making any changes to your existing workflow or UI. Finally, you can set up automated testing for future scripts and deploy efficiently.

With Alan, make your applications hands-free and bring your users the best conversational voice experience.

More of a visual learner? Follow along with our overview video here!

For reference, view sample Alan projects and SDKs here: https://github.com/alan-ai

See Alan Documentation for additional information.

Refer to the following post for more information about Alan: https://medium.com/@alanvoiceai/voice-enabling-your-app-complete-guide-of-how-you-can-add-a-voice-assistant-to-your-existing-46ed72d972df


Written by Alan AI

Alan is a B2B Voice AI platform for developers to deploy and manage Voice Interfaces for Enterprise Apps.
