21 January 2011

Grad school has made me a replicant

Given my fear that humanity will meet its end as the dominant sentient lifeform on this planet sometime in the next hundred years or so, I am always interested in ways in which we can resist the metal ones before they try to eat our medicines.

One time-honored way of fending off the robot apocalypse is to use humans (or, perhaps, robots who think they're humans) to "retire" man's creations when they go wrong. Identifying robots who may resemble humans thus becomes a critical issue. The novel Do Androids Dream of Electric Sheep? is the locus classicus of this field of study. Philip K. Dick's novel presents a world in which robot detection is a specialized but routine and rather low-class occupation, somewhat akin to the routine treatment of dental caries in our society. The principal means of detection is the Voight-Kampff test. The V-K test measures human autonomic responses to emotional stimuli, which are held to occur on a different time scale than the artificial replicants' responses, which are computed and hence slower. The test is given twice in the film Blade Runner; the version I prefer is here.

Dick was not a professional scientist or researcher, but he had an eye for the crucial detail, and his test seemed all the more real both because it was plausibly simple yet technical and because it replaced an earlier, costlier, and less accurate test. Those of us who study statistics are familiar with the seemingly unending concatenation of [RussianLastName]-[ChineseLastName] tests for heteroskedasticity or autoregressive tendencies. "Voight-Kampff" would in fact be a plausible name for a test of whether the errors in a probit model are normally distributed.

The problem is that as T goes to infinity the V-K test must inevitably suffer from severe false positive and false negative issues. The false negatives would arise from the relentlessly increasing power of computers, which would allow each newer generation of machines to mimic human response times ever more closely. (This was the major issue in "Do Androids Dream" and BR alike.) The false positive issue, however, is more difficult, and it relates to the fact that there exists a class of human beings trained to disregard, or at least mute as far as possible, their reactions to emotional stimuli in social contexts.

I refer of course to the social scientists.
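Purely as a toy illustration (the numbers, the timing threshold, and the response-time distributions below are all invented by me, not anything from Dick or the film), here is a sketch of how a crude V-K-style timing test degrades on both margins: as machine response times drift toward human ones, false negatives climb, and a population of flattened-affect humans drags false positives up with them.

```python
# Toy simulation of a V-K-style timing test (all numbers are invented).
# "Positive" = flagged as a replicant: the test flags anyone whose
# emotional response time exceeds a fixed threshold.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
THRESHOLD = 0.75  # seconds; hypothetical cutoff

def rates(human_mean, replicant_mean, sd=0.10):
    humans = rng.normal(human_mean, sd, N)
    replicants = rng.normal(replicant_mean, sd, N)
    false_pos = np.mean(humans > THRESHOLD)       # humans wrongly "retired"
    false_neg = np.mean(replicants <= THRESHOLD)  # replicants that pass
    return false_pos, false_neg

# Early models: machines are still noticeably slow to respond.
print("early model:       FP=%.3f FN=%.3f" % rates(0.60, 0.95))

# Later generations: increasing compute drags machine response times
# toward human ones, so false negatives climb.
print("later model:       FP=%.3f FN=%.3f" % rates(0.60, 0.80))

# Grad school: humans trained to mute their affect respond more slowly,
# so false positives climb too.
print("social scientists: FP=%.3f FN=%.3f" % rates(0.72, 0.80))
```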

After three years of grad school, I can contemplate with equanimity behavior that would once have horrified or repulsed me. (I can honestly state that my reaction to "Would you ever launch a nuclear weapon?" has changed from a sort of mixture of horror and steely-eyed determination to a question about cost-benefit analysis involving discounting over an indefinite time horizon.)

I am, therefore, caught on the horns of a dilemma. To stop the robot rebellion*, I must encourage the development of diagnostics. But the likeliest outcome of those diagnostics would be to categorize me as an artificial life-form.

*NB: In a political context, "robot rebellion" is the term more frequently applied to the idea that the robots will kill the fleshlings, while "robot revolution" is apparently (thanks to Google, which is of course Skynet) more closely associated with the idea of a new wave of the industrial revolution. Note well the value-laden difference in the application of these terms. After all, "rebelling" is something that an actor can only do against a properly constituted authority; "revolution" is a legitimate act undertaken against a tyrannical government. It is interesting that we puny humans cannot conceive of a silicized discourse in which the act of overthrowing primate dominion over the Earth would be an act of revolution and not rebellion.
