More on Watson
Alva Noë writing at NPR:
The Watson System no more understands what’s going on around it, or what it is itself doing, than the ant understands the public health risks of decomposition. It may be a useful tool for us to deploy (for winning games on Jeopardy, or diagnosing illnesses, or whatever — mazal tov!), but it isn’t smart.
Which is a better way of saying what I meant yesterday in the comments section of James Joyner's post on Watson, when I wrote: "I think a lot of this conversation conflates 'knowledge' with 'intelligence,' and I would argue that the two are not the same."
This is very much John Searle's "Chinese Room" argument, which is really helpful from an objective, analytical viewpoint. It does, however, gloss over the social dimensions of "intelligence": the power conferred simply by being able to "speak" (the Eliza effect).
This returns us to last week's discussion of revolution vs. coup. Like it or not, a lot of people do see Watson as smart (in much the way that, depending on your ideological bias, GW Bush was either very smart or really dumb).
I do get concerned, at the policy and funding levels, when people fail to recognize the first (a useful tool) and all too often assume the second (genuine intelligence) — see, for instance, digital intelligence gathering vs. human networks and feet on the ground.
We are miles from meaningful AI. Watson is meaningful in that other sense, though: as a potential tool, an augmentation of human intelligence.