Being involved with the AI class and being a career programmer, I am well aware that the machinery behind Siri is nothing but a computer system. However, when one interacts with a computer system through a voice interface that understands as well as Siri does, one feels almost obliged to be polite. I found myself automatically saying please and thank you in the same way I would if a human being were performing a service on my behalf. Interestingly, Siri responds with things like "I'm just doing my job" or "I exist to serve you."
I meet a lot of people in my work and have to interact with folks from many cultures. They have different rules for what is polite, what is considered necessary, and what is a complete waste of the effort of speaking. English children of my generation were taught that "manners maketh man" and were most often brought up with the idea that one should go through the motions of being polite even if one loathed the person one was dealing with, so for me the habit of being polite has extended to a machine intelligence that seems to be more than it really is.
I spent a while watching a presentation given by Dr Sebastian Thrun, who teaches part of the AI class. In the presentation he showed off the Google driverless car and explained some aspects of its AI system. The car stopped several times for people who had crossed late at a pedestrian crossing, and on a couple of occasions the person gave a little thank-you wave to the car. Obviously the car didn't care, but this raises the question of whether it should.
Reinforcement learning in AI could very well take notice of the user's approval as a cue for better understanding the intent of the user or the people with whom it interacts. It seems obvious to me that this would usually work out well, but when faced with someone who is mentally ill, aberrant, or just plain obtuse, a machine system that takes cues from its users could become very weird indeed.
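To make that concrete, here is a minimal sketch (a toy example of my own, not how Siri or any real assistant is built) of an agent whose only reward signal is user approval. The action names, the feedback functions, and all the parameters are assumptions chosen purely for illustration; the point is that a perverse user who applauds the wrong behaviour will teach the agent exactly that behaviour.

```python
import random

# Toy value learner: each "action" is a candidate response style, and the
# only reward is whether the user seems pleased (+1) or displeased (-1).
ACTIONS = ["helpful", "terse", "sarcastic"]

def run(user_feedback, steps=2000, epsilon=0.1, alpha=0.1):
    """Epsilon-greedy learner driven entirely by the user's approval cues."""
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)        # occasionally explore
        else:
            action = max(q, key=q.get)             # otherwise exploit current estimate
        reward = user_feedback(action)             # +1 approval, -1 disapproval
        q[action] += alpha * (reward - q[action])  # incremental value update
    return q

# A cooperative user rewards helpfulness...
polite_user = lambda a: 1 if a == "helpful" else -1
# ...while an obtuse or malicious user rewards sarcasm instead.
perverse_user = lambda a: 1 if a == "sarcastic" else -1

print("learned from polite user:  ", run(polite_user))
print("learned from perverse user:", run(perverse_user))
```

Run it and the first agent settles on "helpful" while the second settles on "sarcastic": the learning machinery is identical, and only the quality of the human feedback differs.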
Isaac Asimov invented his three laws of robotics, which many people make the mistake of treating as a blueprint for real robotic systems. At first glance they are clear and unambiguous, with seemingly no nuances, but we should remember that Asimov's genius lay in creating such a strong set of rules with the express intention of finding ways in which the robots in his stories could misinterpret them. I believe that as machine intelligence becomes more competent and more capable of interacting with the human race, the auditing systems needed to ensure that it doesn't become twisted will be harder to create than the intelligent agents themselves.