Conducted on stage at I/O 2018 in May, the first part of the demonstration amazed audiences by pairing Google Assistant with Duplex, the AI voice system the company is developing. The virtual PA used the system's female voice to make an appointment at a hair salon. Even more significant than the successful booking was the fact that the salon assistant was unaware they were conversing with a computer.
Less successful but no less impressive was the second part of the demonstration, in which the virtual PA used the system's male voice to try to reserve a table at a restaurant. After misunderstanding the system's spoken requests, the restaurant employee said a booking was not required for the intended number of guests. Calls such as this one are among the reasons that having Duplex pass the Turing Test remains a goal for Google.
Measuring AI: the Turing Test
Computing Machinery and Intelligence, a 1950 paper written by Alan Turing while based at the UK's University of Manchester, contains the benchmark used ever since to measure AI. Essentially, the Turing Test measures intelligent behaviour by whether a machine's behaviour can be distinguished from that of a human.
The scientist envisioned this being done by having a human judge text conversations between a computer and a person. If the evaluator cannot reliably tell the participants apart, the AI passes the test, regardless of whether the content of its answers is factually correct.
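The protocol Turing described can be reduced to a simple loop: pose questions, collect two anonymised answers, and score how often the judge correctly identifies the machine. The sketch below is a toy illustration of that idea only; the function and parameter names are hypothetical, not part of any real benchmark.

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions):
    """Toy sketch of Turing's imitation game (all names hypothetical).

    Each round the judge sees one question and two anonymised answers,
    one from a human and one from a machine, and guesses which index is
    the machine. Returns the judge's identification rate: a rate near
    0.5 means the machine is indistinguishable from the human (a 'pass'),
    while 1.0 means it never fools the judge.
    """
    correct = 0
    for q in questions:
        answers = [("human", human_reply(q)), ("machine", machine_reply(q))]
        random.shuffle(answers)  # hide which speaker is which
        guess = judge(q, [text for _, text in answers])
        if answers[guess][0] == "machine":
            correct += 1
    return correct / len(questions)

# A judge that spots an obviously robotic reply identifies it every time.
rate = imitation_game(judge=lambda q, ans: ans.index("BEEP"),
                      human_reply=lambda q: "Oh, hi there!",
                      machine_reply=lambda q: "BEEP",
                      questions=["How are you?"] * 20)
print(rate)  # 1.0 -- this 'machine' fails the test outright
```

Note that the scoring says nothing about the truth of the answers, only about their distinguishability, which is exactly the point of Turing's design.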
In the same paper that introduced the test, Turing wondered whether it would be possible to design digital computers capable of imitating humans. He also predicted that computers would become capable of human-style conversation realistic enough to fool interrogators at least 30 per cent of the time after a few minutes of questioning.
His prediction almost came true in 2014, when the Eugene Goostman chatbot impersonated a 13-year-old boy in text-based conversation. It was almost realised again by Google Duplex this year. With so much progress being made, it is not unreasonable to think that in the not-so-far-off future we could see Blackjack dealers at a casino replaced by artificially intelligent computers capable of holding conversations, or customer service agents becoming entirely AI-based.
The Voice of Google AI
Duplex could give a human-like voice to Google Assistant. It is built on a recurrent neural network created with TensorFlow Extended, as well as on the tech giant's speech recognition software.
According to the company's principal engineer Yaniv Leviathan and engineering vice-president Yossi Matias, the software draws on the history and topic of the conversation as well as on the audio itself. It also makes use of TensorFlow Extended's hyperparameter optimisation. The system is self-monitoring and seldom requires human assistance for the completion of tasks.
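The idea of a recurrent network folding audio features, the conversation so far and task context into a single evolving state can be sketched in a few lines. This is emphatically not Google's code: every dimension, weight and name below is an illustrative assumption, shown in NumPy rather than TensorFlow to keep it self-contained.

```python
import numpy as np

# Illustrative sizes only -- real systems use far larger dimensions.
AUDIO_DIM, TEXT_DIM, CTX_DIM, HIDDEN = 8, 8, 4, 16

rng = np.random.default_rng(0)
W_in = rng.standard_normal((HIDDEN, AUDIO_DIM + TEXT_DIM + CTX_DIM)) * 0.1
W_h = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1

def rnn_step(h, audio_feats, text_emb, context):
    """One recurrent update: concatenate the three input streams,
    mix with the previous hidden state, squash with tanh."""
    x = np.concatenate([audio_feats, text_emb, context])
    return np.tanh(W_in @ x + W_h @ h)

# Run over a short 'conversation' of random frames.
h = np.zeros(HIDDEN)
for _ in range(5):
    h = rnn_step(h,
                 rng.standard_normal(AUDIO_DIM),  # audio/ASR features
                 rng.standard_normal(TEXT_DIM),   # embedded transcript
                 rng.standard_normal(CTX_DIM))    # task context, e.g. booking details
print(h.shape)  # (16,)
```

Because the hidden state is carried forward at every step, earlier turns of the conversation continue to influence later outputs, which is what lets such a network draw on conversational history rather than treating each utterance in isolation.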
The result is technology capable of short conversations that sound natural. That even extends to the intended removal of verbal triggers such as "OK Google" from shorter exchanges.
Speaking at the demonstration, company CEO Sundar Pichai suggested that Google Assistant and Duplex could become a common way of making calls. He also stressed that the demonstrations were recorded as they happened, and said neither the salon nor the restaurant employee had any idea computers were involved.
Even though the conversations were not long enough to have passed the Turing Test, we are close to seeing it done, and in ways unimagined by the scientist who created the AI test. The future is an exciting place, and it is almost here.