Four points and a note following the demonstration of Google Duplex:
1. Is it OK to deceive people?
We should make AI sound different from humans for the same reason we put a smelly additive in normally odorless natural gas.
The title for this post is part of a quote from Stewart Brand:
2. Is this the best way for a computer to communicate?
I realize that Google Duplex follows much the same reasoning as Isaac Asimov’s essay “Should Home Robot Be Like a Person?”, where he argued that for a robot to step into the role of a human, and effectively replace one, its shape and articulation would have to be human.
But I think there is a much better solution to booking a table or scheduling an appointment than having a computer call a human and communicate like one. Wouldn’t it be better, safer, and inherently more trustworthy for the computer to communicate directly with another computer?
I mean, I recently booked a table at a restaurant (Le Baligan in Cabourg, France – highly recommended) using their website. Wouldn’t my virtual assistant be put to better use analysing the webpage to find the booking interface?
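To make the idea concrete, here is a minimal sketch of what such a machine-to-machine booking could look like. The endpoint does not exist and the schema (restaurant, party size, date, time) is entirely hypothetical; the point is simply that a structured request leaves no room for the misunderstandings a voice call invites:

```python
import json

def build_booking_request(restaurant, party_size, date, time):
    """Serialize a table reservation as machine-readable JSON.

    The field names here are invented for illustration -- any agreed
    schema would do. ISO 8601 dates keep both machines unambiguous.
    """
    return json.dumps({
        "restaurant": restaurant,
        "party_size": party_size,
        "date": date,
        "time": time,
    }, sort_keys=True)

payload = build_booking_request("Le Baligan", 2, "2018-06-01", "19:30")
print(payload)
```

No accents to parse, no hold music, no pretending to be human: just two programs exchanging an unambiguous record.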
3. Who knows what is at the other end of the line?
Now for the fun part. Google Duplex will obviously also be put in place at the receiving end, so we will end up with two computers talking to each other, each pretending to be human, talking and listening like humans. I mean, what could possibly go wrong?
4. Are quick replies any more genuine?
This sometimes appears in my Gmail app when I write to my friends:
These quick replies are supposed to make my communication easier and faster, but if I choose one of them, I am sending a message I didn’t write.
There is a reason I don’t call it AI. It’s not; it’s Machine Learning.