Some years ago, as a soon-to-be-ex-AI guy, I came to a realization that I immodestly named “Gordon’s Law”: it’s easier to program us to act like computers than it is to program computers to act like us.
If Gordon’s Law were not so, we would have voice recognition instead of “interactive” voice menus (“press 3 if you’ve despaired of leaving this menu”, etc.). We would have automatic Root Cause Analysis rather than trouble ticketing systems. We would have online advertising tailored to our wants and current projects rather than “personalization”.
To be sure, there is Watson, and there is Deep Blue, and my wife told me yesterday there's software competing to be crossword puzzle champion of the world. But in some sense — and I include Siri here — these are parlor tricks. As Joseph Weizenbaum found out years ago with his software psychotherapist Eliza, a few clever tricks can make software seem human to humans. They don't wear well. There's talk of having Watson do medical diagnosis, but there's also talk of people wanting to throw their iPhones out the window when it turns out Siri really doesn't do a very good job of understanding what we want or what we want to know. And if Watson ends up doing decent medical diagnosis, I'll eat my hat.
Why should Gordon’s Law be true? Aren’t our brains “just” meatware? Isn’t everything, as Stephen Wolfram says, a computation?
I don't know, but I do know that we work well together — information devices and humans — when each of us does what we're good at. We don't pretend to be machines, and they don't pretend to be humans.