[[extra-curricular]]
[ACM suggests that some percentage of time in all Undergraduate CS courses should be spent on discussing
ethics. Maybe this will fill that role... At any rate, I have been sending some version of the following since Fall 2003, and I see no reason
to break the tradition this year ;-) ]
We talked a lot about the role of biases in making learning feasible. When the kid jumps to the conclusion that the
whole big thing his mommy is pointing to while crying "BUS" must be the bus, or when you assume that the rabbit-like thing
that jumped into your line of vision, as you stood in the African savannah with a Masai irrationally screaming "GAVAGAI" in your ears, must
be the gavagai, you were making generalizations that are both computationally efficient and correct.
Inductive generalizations are what allow organisms with their limited minds to cope with the staggering complexity
of their environment. They had to make rapid "fight or flight" decisions, and they had to do biased
learning to get anywhere close to survival.
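To put a rough number on why bias helps, here is a back-of-the-envelope sketch in Python (the numbers are illustrative, not anything from the class): the standard PAC sample-complexity bound for finite hypothesis classes, m >= (1/eps)(ln|H| + ln(1/delta)), says that the number of examples a consistent learner needs grows only with the log of the size of its hypothesis space. A strong bias, i.e. a small |H|, is exactly what lets the kid get away with one "BUS" example.

```python
# A rough sketch: the standard PAC bound for finite hypothesis classes.
# m >= (1/eps) * (ln|H| + ln(1/delta)) examples suffice for a consistent
# learner to be "probably (1 - delta) approximately (error < eps) correct".
import math

def pac_sample_bound(ln_hypothesis_space_size, eps=0.1, delta=0.05):
    """Number of examples sufficient under the finite-|H| PAC bound."""
    return math.ceil((ln_hypothesis_space_size + math.log(1.0 / delta)) / eps)

# A biased learner whose prior assumptions leave only ~1,000 candidate concepts:
print(pac_sample_bound(math.log(1_000)))          # on the order of 100 examples

# An unbiased learner over all boolean functions of 30 features (|H| = 2^(2^30)):
print(pac_sample_bound((2 ** 30) * math.log(2)))  # billions of examples
```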
So, armed with the wisdom of this class, should we really wear complaints about the biases in our behavior
as badges of honor?
Hmm.. Where does this leave us vis-a-vis
stereotypes and racial profiles of the
"all Antarcticans are untrustworthy" or "all
Krakatoans are smelly" variety?
After all, they too are instances of
our mind's highly useful ability to induce patterns from limited
samples. How can we legitimately ask our mind not to do the thing it is so darned good at doing?
So, what, if any, is the best computational argument against stereotyping?
One common argument is that the stereotype may actually be wrong--in
other words, that it is a bad (non-PAC) generalization, either
because it is based on selective (non-representative) samples, or
because the learner intentionally chose to ignore training samples
that disagree with its hypothesis. True, some
stereotypes--the "women can't do math" and "men can't cook" variety--are of this form.
However, this argument alone will not suffice, as it leaves open the
possibility that it is okay to stereotype if the stereotype is
correct. (By correct, we must, of course, mean "probably approximately
correct," since there are few instances where you get metaphysical
certainty of generalization.)
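As a toy illustration of the "selective samples" failure mode (a sketch with made-up numbers, not data from anywhere), a learner that mostly ignores disconfirming encounters can be almost perfectly consistent with what it remembers and still be wildly wrong about the true rate:

```python
# A toy sketch (made-up numbers): estimating "how untrustworthy are Antarcticans?"
# from a selective sample versus a representative one.
import random
random.seed(0)

TRUE_RATE = 0.02  # hypothetical true fraction of untrustworthy Antarcticans

def remembered_encounters(n, selective):
    """Simulate n encounters; a selective observer forgets most of the
    disconfirming (trustworthy) ones, keeping mainly the bad ones."""
    kept = []
    for _ in range(n):
        untrustworthy = random.random() < TRUE_RATE
        if selective and not untrustworthy and random.random() < 0.95:
            continue  # disconfirming example ignored
        kept.append(untrustworthy)
    return kept

for selective in (False, True):
    sample = remembered_encounters(10_000, selective)
    print(f"selective={selective}: estimated rate ~ {sum(sample) / len(sample):.2f}")
# Representative sample: ~0.02 (close to the truth).
# Selective sample: ~0.30 -- a confident, "well-supported", and wrong generalization.
```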
What exactly could be wrong in distrusting a specific Antarctican because
you have come across a large sample of untrustworthy Antarcticans?
I think one way to see it is perhaps in terms of "cost-based
learning". In scenarios like these, you, the learning agent, have
a high cost on false negatives--if you miss identifying an
untrustworthy person, or a person who is likely to mug you on a dimly
lit street, or a person who is very likely to be a "bad" employee in
your organization, your success/survival chances slim down.
At the same time, the agent incurs a much smaller cost on false positives, despite
the fact that the person who is falsely classified as positive by your
(negative) stereotype suffers a very large cost. Since the false
positive *is* a member of the society, the society does incur the cost of
your false positives, and we have the classic case of individual good
clashing with societal good.
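The asymmetry is easy to see with the textbook minimum-expected-cost decision rule (the numbers below are purely hypothetical): act on the negative stereotype only when P(untrustworthy) exceeds C_fp / (C_fp + C_fn). When the false-positive cost on your own ledger is tiny, that threshold is tiny too; when society's ledger includes the cost to the falsely suspected person, the threshold moves way up.

```python
# A sketch with hypothetical costs: the expected-cost-minimizing rule is
# "distrust iff p * C_fn > (1 - p) * C_fp", i.e. iff p > C_fp / (C_fp + C_fn).
def distrust_threshold(cost_false_positive, cost_false_negative):
    """Probability of 'untrustworthy' above which acting on the stereotype
    minimizes expected cost."""
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# The individual agent's books: a missed mugger is catastrophic (100),
# a wrongly distrusted stranger costs the agent almost nothing (1).
print(distrust_threshold(cost_false_positive=1, cost_false_negative=100))   # ~0.01

# Society's books: the falsely distrusted person is also a member of
# society, so their (large) cost is on the ledger too.
print(distrust_threshold(cost_false_positive=50, cost_false_negative=100))  # ~0.33
```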
This, then, is the reason civil societies must go the extra mile to
discourage acting on negative stereotypes, so that we do not round up all
Antarcticans and put them in boot camps, or stop all Krakatoans at
airport security and douse them with Chanel No. 5. And societies, the
good ones, by and large, do, or at least try to. The golden rule,
the "better to let a thousand guilty go free than imprison one innocent" principle, and
the general societal strictures against negative stereotypes are all
measures towards this.
You need good societal laws (economists call these "Mechanism Design")
precisely when the individual good/instinct clashes with the societal good.
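In the same hypothetical cost terms as the sketch above, a "mechanism" is just something that puts part of the societal cost of a false positive back onto the individual's ledger (a fine, liability, stigma), so that the individually optimal threshold lines up with the societally optimal one:

```python
# Continuing the hypothetical numbers from the previous sketch.
def distrust_threshold(c_fp, c_fn):
    return c_fp / (c_fp + c_fn)

societal = distrust_threshold(c_fp=50, c_fn=100)   # ~0.33
# A societal sanction for acting on a false positive shifts the agent's own
# false-positive cost from 1 up to 50, realigning the two thresholds.
individual_with_sanction = distrust_threshold(c_fp=1 + 49, c_fn=100)
print(societal, individual_with_sanction)           # both 0.333...
```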
So, you are forced to learn to sometimes avoid acting on the highly
efficient, probably PAC, generalizations that your highly evolved
brain makes. I think.
Yours illuminatingly... ;-)
Rao
Epilogue/can skip:
It was a spring night in College Park, Maryland sometime in
1988. Terrapins were doing fine. The Len Bias incident was slowly
getting forgotten. It was life as usual at UMD. About the only big
(if a week-old) news was that of a non-Caucasian guy assaulting a
couple of women students in parking lots. I was a graduate student,
and on this particular night I did my obligatory late-evening visit to
my lab to feign the appearance of some quality work. My lab was towards the edge of the campus;
just a couple more buildings down Paint Branch Drive, and you got
to the poorly lit open-air parking lots.
On that night I parked my car and walked the couple of blocks to my
lab, only to remember that I had left a book in the car. So, I turned, and
started walking back to the parking lot. As I was walking, I noticed
that a woman walking in front of me turned a couple of times to look back at me. I remembered
that I had passed her going in the opposite direction. Presently I
noticed her turning into the Cryogenics building, presumably her
lab. As I passed by the cryo lab, however, I saw the woman standing
behind the glass doors of the lab and staring at me.
Somewhere after a few more steps, it hit me with lightning
force--I was a false positive! The woman was ducking into
the lab to avoid the possibility that I might be the non-Caucasian
male reportedly assaulting campus women. I knew, at a rational level,
that what she was exhibiting was a reasonably rational survival
instinct. But it did precious little to assuage the shock and
diminution I felt (as evidenced by the fact that I still remember the
incident freshly, after all these years).
yourself sometime in your life...