The Coming Revolution of Voice Control With Artificial Intelligence
As consumer devices become more capable through voice assistants such as Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana and Google’s Assistant, it is only natural to expect these artificial intelligence (AI) applications to move into more business settings. These capabilities could emerge in a number of areas, such as voice-controlled interfaces on common productivity applications and spoken alerts that warn IT managers of potential infrastructure faults or breaches.
This makes sense — after all, there is already an AI-enabled toothbrush that learns exactly how you clean your teeth and offers ways to improve your dental hygiene. It is just a matter of time before major industry players develop B2B apps with similar smart features.
Understanding Voice Control Risks
Today, voice-enabled products can be used for mundane tasks, such as dictating emails or text messages, querying internet searches or playing music and TV shows. This is just the beginning. Industry expert Tristan Louis predicted that artificial intelligence “will become a required component of every technology offering.” But before you take the leap, there are some security implications to consider.
First, app designers must understand the development environments required to build these new voice-controlled apps. All four of the aforementioned vendors are working to expand their respective reaches: each provides application programming interfaces (APIs) for its voice interface and has worked to build better voice support into various operating systems.
Amazon, for instance, launched its Alexa Voice Service and offers a $300, stripped-down development kit from Conexant for building Alexa voice apps. Google, meanwhile, provides an Actions API for building Assistant apps using Node.js and the Google Cloud SDK. Similarly, Apple’s SiriKit enables iOS 10 apps to work with Siri voice prompts and adds the ability to accept payments and book rides. It uses Apple’s standard Xcode development environment and the Intents framework. Finally, Microsoft’s Cortana Development Center contains the programming extensions that enable its voice interface.
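Whichever platform you choose, the basic pattern is similar: the assistant parses the user’s speech into a named intent, hands that intent to your code, and speaks back whatever structured response you return. As a rough illustration, here is a minimal sketch in plain Node.js of the kind of JSON payload an Alexa skill hands back to the Alexa Voice Service; the response structure follows Amazon’s documented format, but the helper function and the `CheckServerStatusIntent` name are hypothetical examples, not part of any vendor SDK.

```javascript
// Sketch: constructing the JSON response an Alexa skill returns.
// buildSpeechResponse is a hypothetical helper, not an SDK function.
function buildSpeechResponse(text, endSession) {
  return {
    version: '1.0',
    response: {
      outputSpeech: {
        type: 'PlainText', // the assistant speaks this text aloud
        text: text,
      },
      shouldEndSession: endSession, // false keeps the dialog open
    },
  };
}

// A skill handler maps a recognized intent name to a spoken reply.
// 'CheckServerStatusIntent' is an illustrative business-oriented intent.
function handleIntent(intentName) {
  if (intentName === 'CheckServerStatusIntent') {
    return buildSpeechResponse('All monitored servers are responding.', true);
  }
  return buildSpeechResponse("Sorry, I didn't understand that request.", false);
}
```

The point of the sketch is that the voice plumbing (wake word, speech recognition, intent matching) lives entirely in the vendor’s cloud; your app only ever deals in structured requests and responses like the one above, which is why each vendor’s response schema locks you into its ecosystem.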
If you are already using these basic development environments, you have less to learn than someone starting from scratch. Each development environment is, for the most part, its own island, although Microsoft offers a converter that transfers Alexa code into something Cortana can use, along with a way to voice-enable connections brokered through its Bot Framework.
Challenges for Organizations
These development environments are just the tip of the coding iceberg. Google, Apple, Intel and Qualcomm each back separate alliances for home-connected IoT devices that may also be applicable to business devices. All these competing standards will keep voice-enabled AI assistants fragmented in the short term and force developers to bet on an emerging standard.
If your voice projects gain momentum, you might need to build a new department in your organization to coordinate them. Andrew Ng, speaking at a recent Fortune Brainstorm Tech dinner at the Bellagio Hotel in Las Vegas, argued that CEOs need a chief AI officer.
“If you have a lot of data and you want to create value from that data, one of the things you might consider is building up an AI team,” he said. That might be a challenge, however, especially given the widespread shortage of IT resources.
Privacy and Security Issues
Finally, voice-powered AI apps introduce both privacy and security issues. After a well-publicized story in which a child bought a dollhouse using Alexa commands, We Live Security published a series of suggestions for preventing voice-controlled systems from automatically ordering items from Amazon.
But that’s on the tamer end of the spectrum. It is not well known how much information and recorded audio Amazon collects, or how the company will use it, which raises serious privacy concerns. Industry analyst Shelly Palmer said he is fearful of the data collected by these assistants.
“It will make what they do with our current behavioral profiles look like primitive data processing,” he wrote in a LinkedIn post. “The world will be a very different place.”