Building secure voicebot services

Sander Nieuwenhuis

When building a voicebot solution, two questions always pop up: “… and what about privacy?” and “how secure is the voicebot solution?”. Great questions! And luckily, many organizations understand the need to address these questions themselves, instead of waiting for a solution provider partner like us to bring them up. It shows that these organizations are aware that both security and privacy are critical topics.

So, what is the role of privacy and security in voicebot solutions? I’ll explain that in this blog post, and provide you with insights into how to prepare your organization for these issues.

The structure of a voicebot

Voicebots typically use cloud services, each performing one specific task in the overall voicebot landscape. A typical voicebot setup looks something like this:

[Figure: a typical voicebot setup — Voice Device, Voice Cloud, Interpretation service, Conversation Hub and back-end services]

In this picture, you can see the Voice Device (Google Home in this case), connected to the appropriate Voice Cloud service (Google Cloud). When you say the wake word (“Hey Google”), the device starts recording your command and sends it to the associated Voice Cloud. The Voice Cloud service converts the spoken command into a text string (‘text-i-fied voice’). This string is text that you and I can read, but the computer does not yet understand what it means.

To interpret the string, it is sent to the Interpretation service, which converts it into intents and entities: which action the user wants, and the object the action should be performed on. For example, in “tell me the time”, the intent is ‘tell me’ and the entity is ‘the time’.
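To make this a bit more tangible, here is a sketch of what an interpreted command could look like once the Interpretation service is done with it. The field names below are purely illustrative; they are not the actual response format of Dialogflow or any other product.

```python
# Illustrative only: the general shape of an interpreted voice command.
# Real interpretation services (such as Dialogflow) use their own field names.
interpreted_command = {
    "query_text": "tell me the time",  # the text-i-fied voice command
    "intent": "tell me",               # the action the user wants
    "entities": {
        "subject": "the time",         # the object the action applies to
    },
    "confidence": 0.92,                # how sure the service is about the match
}
```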

After the Interpretation service (Dialogflow, in this example) has interpreted the voice command, the result is sent to a Conversation Hub. This is the actual custom service performing the business logic. That might include looking up data in a Data Store connected to the Conversation Hub, or connecting to an organization’s back-end service to look up specific product information, customer information or news messages.

Once the Conversation Hub has done its processing, the result is sent back as text via the Interpretation service to the Voice Cloud. The Voice Cloud turns the text into audio (‘voice-i-fied text’). Finally, this audio is sent back to the Voice Device and played to the user. That is a lot of steps, and they happen with almost no delay.
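To give an impression of what the Conversation Hub step can look like in code, here is a minimal sketch in Python with Flask. The request and response fields are modeled loosely on Dialogflow’s webhook format, so treat them as assumptions and check the documentation of whichever Interpretation service you actually use.

```python
# Minimal Conversation Hub sketch (Python + Flask).
# Field names are modeled loosely on Dialogflow's webhook format; verify them
# against the Interpretation service you actually use.
from datetime import datetime

from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/webhook", methods=["POST"])
def conversation_hub():
    body = request.get_json(force=True)
    intent = body.get("queryResult", {}).get("intent", {}).get("displayName", "")

    # Business logic: in a real service this is where you query the Data Store
    # or call a back-end service for product, customer or news information.
    if intent == "tell.time":
        answer = f"It is now {datetime.now().strftime('%H:%M')}."
    else:
        answer = "Sorry, I don't know how to help with that yet."

    # The text answer travels back via the Interpretation service and the
    # Voice Cloud, which turns it into audio for the Voice Device.
    return jsonify({"fulfillmentText": answer})


if __name__ == "__main__":
    # For local experiments only; run behind HTTPS in production.
    app.run(port=8080)
```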

The principles of privacy

With regard to privacy, it is very important to realize that the voicebot structure is divided into two distinct areas of responsibility:

[Figure: the two responsibility domains — the user’s (blue) and the voicebot organization’s (green)]

On the left, you see David, the user of the voicebot service. He is the owner of the Voice Device: he installed it and agreed to the user agreement of the supplier of the Voice Device and Voice Cloud. Anything that happens in this blue domain is his responsibility. This might include agreeing to let the Voice Device/Cloud provider listen to his conversations to improve its voice services.

On the right, you see Mary, the Voicebot Engineer. She works for the organization that delivers the voicebot service. She is responsible for everything that happens in the green domain, including the privacy and security of the data being processed.

Because the voicebot service is built from various cloud services, Mary needs to check whether she can agree to the privacy terms of each service being used: for example, the privacy terms of Google Dialogflow and of the cloud service in which the Conversation Hub is hosted. Important questions she must ask include: what do the suppliers do with the data being processed? Where is the data physically stored? And who can actually access it?

With regard to privacy, it is Mary’s responsibility to, for example, apply the GDPR (the EU privacy legislation) if personal data is processed in the voicebot. This includes implementing Data Subject Rights — the right of a user to access, delete or change their personal data — and applying security measures proportional to the sensitivity of that data.
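As an illustration, the sketch below shows two hypothetical endpoints on the Conversation Hub: one for the right of access and one for the right to erasure. The in-memory dictionary stands in for whatever Data Store you actually use, and the endpoint paths are made up for this example.

```python
# Hypothetical Data Subject Rights endpoints (right of access and right to erasure).
# The in-memory dict stands in for the real Data Store behind the Conversation Hub.
from flask import Flask, jsonify

app = Flask(__name__)

# user_id -> personal data collected through the voicebot (illustrative).
user_records = {
    "david": {"name": "David", "last_commands": ["tell me the time"]},
}


@app.route("/users/<user_id>/data", methods=["GET"])
def access_data(user_id):
    """Right of access: return everything stored about this user."""
    # In a real service you would authenticate the requester before answering.
    record = user_records.get(user_id)
    if record is None:
        return jsonify({"error": "unknown user"}), 404
    return jsonify(record)


@app.route("/users/<user_id>/data", methods=["DELETE"])
def erase_data(user_id):
    """Right to erasure: remove the user's personal data."""
    user_records.pop(user_id, None)
    return jsonify({"status": "deleted"})
```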

It is a good idea to publish the overall privacy terms of the voicebot, so users of the service know what happens with their personal data and how they can exercise their Data Subject Rights. It is an even better idea to get explicit consent before the voicebot is used, although doing this in a user-friendly way is still hard. Having the voice device read out the entire list of service terms is not recommended, but you could have it read a short version and refer to a website for the full text.

The security side

In addition to privacy, Mary is also responsible for security. She needs to make sure that all data is sent and stored securely. The services that make up the voicebot architecture communicate with each other through APIs, and that communication needs to be protected. At the very least, the data must travel over an encrypted connection (HTTPS) so nobody can eavesdrop on it, and each service must be able to verify that it is really talking to the other service and not to an impostor (for example, by using API keys).
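A minimal sketch of both measures is shown below: incoming calls must carry a shared API key, and outgoing calls to the back-end go over HTTPS with their own key. The X-API-Key header name, the back-end URL and the environment variable names are illustrative choices, not a standard.

```python
# Sketch: require an API key on incoming calls and use HTTPS for outgoing ones.
# Header name, back-end URL and environment variable names are illustrative.
import hmac
import os

import requests
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
EXPECTED_API_KEY = os.environ["HUB_API_KEY"]


@app.route("/webhook", methods=["POST"])
def secured_webhook():
    # Constant-time comparison avoids leaking key information via timing.
    supplied = request.headers.get("X-API-Key", "")
    if not hmac.compare_digest(supplied, EXPECTED_API_KEY):
        abort(401)

    # Outgoing calls to the back-end also go over HTTPS, with their own key.
    backend = requests.get(
        "https://backend.example.com/products",  # hypothetical back-end service
        headers={"X-API-Key": os.environ["BACKEND_API_KEY"]},
        timeout=5,
    )
    return jsonify({"fulfillmentText": f"I found {len(backend.json())} products."})
```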

Mary also needs to review the security and service terms of the services being used: are they, for example, ISO 27001 or SOC 2 certified, so she can rely on them? Note that security is more than just confidentiality: the availability of the services and the integrity of the data being processed are also elements to check.

Security also needs ongoing management. For example, Mary needs to periodically check the HTTPS configuration (certificate validity, supported TLS versions) and rotate the API keys. She also needs to manage and review the access rights of engineers with access to the environment, and, for example, withdraw the access rights of engineers who no longer work on the voicebot service. Back-ups and restore tests are also part of the security management activities.
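Key rotation, for example, is easier when the service temporarily accepts both the current and the previous key, so that callers can switch over without downtime. A small sketch of that idea follows; the environment variable names are again just illustrative.

```python
# Sketch: accept the current key and, during a rotation window, the previous one.
# Environment variable names are illustrative.
import hmac
import os


def is_valid_api_key(supplied: str) -> bool:
    active_keys = [
        os.environ.get("HUB_API_KEY", ""),
        os.environ.get("HUB_API_KEY_PREVIOUS", ""),  # empty outside rotation windows
    ]
    return any(
        key and hmac.compare_digest(supplied, key)
        for key in active_keys
    )
```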

Embracing the voice

Voicebots are a new and good example of the use of (SaaS) cloud services. Unlike in the data center environments we were used to, you no longer secure a single server configuration; you configure and manage individual services, each performing a specific task. For a reliable service, all of these services need to work together and need to be secure, both individually and as a chain. So instead of being just an engineer, Mary needs to be a service manager who can oversee the entire network of services, in order to guarantee the privacy, security and overall functionality of the voicebot.

At Mirabeau - A Cognizant Digital Business, we have already guided many organizations on the security and privacy aspects of conversational interfaces and voice services. We offer workshops and programs that address both the possibilities and the risks: we provide insight into data protection regulations, like the GDPR, and help you interpret possible risks. Further, we can advise on and create well-thought-out security designs using our proprietary security and privacy frameworks: we analyze, explain and visualize data flows and provide solutions to your data protection concerns.


Do you need any assistance in solving the privacy and security puzzle, or just want to know more about these aspects in conversational interfaces or voice assistants? Drop us a line! Or get in touch with me at snieuwenhuis@mirabeau.nl.