I don’t get the point of the Google Assistant AI’s ability to make phone calls like a human being.
If AI is now competent enough to make realistic phone calls, it is also competent enough to receive them. Which means that the higher the adoption of AI to make phone calls, the higher the adoption of AI to receive them will be. The era of AI making phone calls to humans will therefore be a very short one.
So why bother making them sound human at all? Why not simply have an AI-to-AI interface that can connect two digital assistants to each other whenever necessary? That will be faster, less clumsy, and certainly much more efficient.
I can think of two reasons (there may be more). One is that Google’s afraid of resistance from humans who’re increasingly worried about AI replacing human activity. A direct AI-to-AI interface may be more efficient, but it’s also much scarier for people to deal with. It’s much nicer to deal with an AI that sounds and talks like you do, but doesn’t threaten to displace you or conduct weird conversations with other AI in coded language you don’t understand.
The way this was marketed by Sundar Pichai at Google I/O points to this. The Assistant was shown as a customer’s tool, able to interact with humans working in small businesses. The Assistant’s ability to act as a business tool, replacing the human at the workplace, was played down almost to the point of non-existence.
The other probable reason Google’s taking this route is that Google itself is worried about an AI-to-AI interface. Over the last couple of years, more and more tech experts (including CEOs like Elon Musk and to an extent, Pichai himself) have spoken out against the dangers and unknown implications of Artificial Intelligence. There’s an awareness that as AI systems interact more with each other without human oversight, human beings increasingly lose control over the process. There’s no telling what AI-to-AI interaction can result in, and some results may be dangerous to humans.
As a result, Google may feel compelled to bind its AI systems into discrete, discernible packages that don’t interact with each other directly via digital code. Instead, it may have decided that a clumsy process in which AI systems convert digital signals into analog ones and vice versa is easier to observe, understand, and intervene in.
I’m not a tech professional, so a lot of my speculation is basically made from Plato’s cave, observing shadows cast by digital giants on the walls that are my newsfeeds and social media channels. I’ve no direct insight into Google’s design and marketing decisions.
Nevertheless, I wouldn’t be surprised if Google’s decision to push this strange humanised assistant as an AI product is prompted by a mixture of apprehension about human reactions, apprehension about AI evolution, and a desire to make profits from AI all the same. Meanwhile, the rest of us have to treat this with caution – neither euphoria nor gloom-and-doom pessimism will work. AI development is still very nascent and there’s no saying where it will go from here. We need to remain alert to many possibilities.
PS – Were that hair salon assistant and restaurant phone operator aware they were being contacted by AI? I certainly hope so, and I hope Google obtained their permission to reproduce their recorded conversations. If not, it raises questions about research ethics.