Configure Voice-Assisted Search

You can implement a voice chatbot using Smart Answers to integrate your search platform with a voice interface application. This procedure uses Google Dialogflow as an example, but you can use other voice interface platforms that support the following:

  • Speech-to-text translation

    Translate the customer’s speech into a text-based query for Fusion.

  • Workflow routing

    Route queries to your Smart Answers query profile.

  • Text-to-speech translation

    Translate Fusion’s text response into speech for the customer.

Voice search data flow

The high-level steps will be similar for any voice interface application:

  1. Enable Smart Answers on your Fusion platform.

  2. Configure your voice interface application to perform speech-to-text on voice queries.

  3. Configure your voice interface application to route queries to the Fusion query profile associated with your Smart Answers query pipeline.

  4. Configure your voice interface application to perform text-to-speech on Fusion’s responses.

In this example procedure, we provide JavaScript code, which you can copy and customize, that performs the routing to your Fusion cluster.
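The routing step above amounts to sending the transcribed query to a Fusion query profile endpoint over HTTPS. As a rough sketch, building that request might look like the following; the `/api/apps/{app}/query/{profile}` path is a common Fusion pattern, and the host, app, and profile names are placeholders, so confirm the exact endpoint for your Fusion version:

```javascript
// Sketch: build the request URL for a Fusion query profile from a
// transcribed voice query. Host, app, and profile names are
// placeholders; confirm the endpoint path for your Fusion version.
function buildFusionQueryUrl(host, app, profile, queryText) {
  const params = new URLSearchParams({ q: queryText });
  return `https://${host}/api/apps/${app}/query/${profile}?${params}`;
}

// Example: a spoken question, already converted to text by the
// voice platform, becomes a Fusion query URL.
const url = buildFusionQueryUrl(
  'fusion.example.com',
  'smart-answers-app',
  'sa-profile',
  'how do I reset my password'
);
```

The webhook code you deploy later performs this kind of request, then passes Fusion’s answer back to the voice platform for text-to-speech.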

1. Enable Smart Answers

To enable Smart Answers, you need a trained machine learning model, a question-and-answer body of content, and a query pipeline configured to apply the machine learning model to the content.

See Smart Answers: Getting started for detailed steps to enable Smart Answers.

2. Perform speech-to-text on voice queries

In the case of Google Dialogflow, this functionality is enabled by default whenever you create a Dialogflow agent. See your voice interface platform’s documentation to find out how to enable speech-to-text for your application.

3. Route voice queries to Fusion

The procedure below configures a Google Dialogflow agent to demonstrate one possible way of routing voice queries to Fusion. For other voice interface platforms, see that platform’s documentation.

In this example, we’ll configure the agent so that all intents use a webhook, for which we provide a code sample.
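The exact variables depend on the voice.js file you download in the procedure below. As a rough sketch of the kind of configuration and answer-field handling involved, it might look like the following; every name here is an assumption for illustration, not the actual contents of voice.js:

```javascript
// Illustrative only: these variable names are assumptions about what
// voice.js exposes, not the actual file contents. Use the real names
// from the voice.js you download.
const fusionHost = 'fusion.example.com';   // your Fusion host
const fusionApp = 'smart-answers-app';     // your Fusion app name
const queryProfile = 'sa-profile';         // your Smart Answers query profile

// If your pipeline returns the answer in a field other than answer_t,
// change the field name. The response shape (response.docs) is also an
// assumption; adjust it to match your pipeline's output.
function extractAnswer(fusionResponse, field = 'answer_t') {
  const docs = (fusionResponse.response && fusionResponse.response.docs) || [];
  return docs.length ? docs[0][field] : null;
}
```

Keeping the field name in one place like this makes it easy to adapt the webhook when your pipeline’s response schema changes.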

  1. Make sure you have created a billing account for Google Dialogflow to enable all of the required features.

  2. Download the voice.js file and update the variables below to match your Fusion environment:

    voice.js variables

    If your Fusion response field is not named answer_t, you also need to update the field name in the code:

    Update the answer field name

  3. In your Google Dialogflow agent interface, navigate to Intents.

    A newly created Google Dialogflow agent contains two default intents:

    • Default Fallback Intent

    • Default Welcome Intent

    Note
    The voice.js code includes a handler for each of the default intents, plus the "Try again" and "Yes" intents (which you’ll create below). If your application includes additional intents, you may need to add handlers for them to the code.
  4. Navigate to Intents > Default Fallback Intent and configure it as follows:

    1. Delete all of the items in the Context, Events, Training Phrases, Actions, and Response sections.

    2. Under Fulfillment, select Enable webhook call for this intent.

      Enable webhook call

    3. Click Save.

  5. Click Default Welcome Intent and configure it as follows:

    1. Delete all of the items in the Training Phrases and Responses sections.

    2. Under Fulfillment, select Enable webhook call for this intent.

    3. Click Save.

  6. Navigate to Intents > Create Intent, call the new intent "Try again" (case-sensitive), and configure it as follows:

    1. Delete all of the items in the Context, Events, Actions, and Response sections.

    2. Under Training Phrases, add "Try again" (case-sensitive).

    3. Under Fulfillment, select Enable webhook call for this intent.

    4. Click Save.

  7. Navigate to Intents > Create Intent, call the new intent "Yes" (case-sensitive), and configure it as follows:

    1. Delete all of the items in the Context, Events, Actions, and Response sections.

    2. Under Training Phrases, add the following phrases (case-sensitive):

      • "Yes"

      • "Yep"

      • "Perfect"

    3. Under Fulfillment, select Enable webhook call for this intent.

    4. Click Save.

  8. Navigate to Fulfillment and configure it as follows:

    1. Select Enable Inline Editor.

    2. Overwrite the default code with the voice.js code that you customized in step 2 above.

      Inline code editor

    3. Click Deploy.

Note
To enable external API calls to Fusion, you must upgrade your Google Firebase billing account to the "Pay as you go" plan. Google Firebase is also where you can view the console logs for these API calls.
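The deployed webhook follows the usual Dialogflow fulfillment pattern of mapping intent names to handler functions, covering the four intents you created above. A minimal sketch of that routing, with a fallback for unrecognized intents; the handler bodies and responses here are illustrative, not the actual voice.js code:

```javascript
// Sketch: route an incoming intent name to its handler, falling back
// to the Default Fallback Intent handler for anything unrecognized.
// Responses here are illustrative, not the actual voice.js replies.
function routeIntent(intentName, handlers) {
  const handler = handlers[intentName] || handlers['Default Fallback Intent'];
  return handler();
}

const handlers = {
  'Default Welcome Intent': () => 'Hi! Ask me a question.',
  'Default Fallback Intent': () => "Sorry, I didn't catch that.",
  'Try again': () => 'Okay, please repeat your question.',
  'Yes': () => 'Great, glad that helped!',
};
```

Because the intent names are matched as literal strings, they must match the case-sensitive names you gave the intents in the Dialogflow console.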

4. Perform text-to-speech on Fusion’s responses

  1. In your Google Dialogflow agent interface, click the gear icon next to your agent’s name.

  2. Under Automatic Spell Correction, enable Allow ML to correct spelling of query during request processing.

  3. Under Text to speech, select Enable Automatic Text to Speech.

  4. Click Save.

5. Test your configuration

  1. In your Google Dialogflow agent interface, click the Google Assistant link in the right-hand sidebar.

    Google Assistant link

  2. Click Talk to <your agent name>.

  3. Click the microphone and ask a question.

    If you don’t receive a response that makes sense, check the logs in Google Firebase for information you can use to troubleshoot your configuration.

Tip
Once your voice chatbot is working, you can click Integrations to access a pre-built widget that you can paste into the code for your front-end application.

See the Google Dialogflow documentation for more details.