Friday, March 5, 2010

when will the computers get radically brilliant?

so i just read a piece by AI guru ben goertzel and colleagues, written for a transhumanist magazine, that concludes lots of current researchers believe humanly or superhumanly intelligent robots are right around the corner (mid-century). oddly, the piece's own data actually dispute this conclusion. we'll get to that in a moment.

the main data for the claim come from a survey performed at the artificial general intelligence conference this year. it is worth noting that the folks who show up at the AGI conference are precisely the AI folks most likely to think that artificial intelligences can and will become "general" (equal to human intelligence or beyond). otherwise, why would they be there? even at this conference, 9 out of 21 respondents believed we will never see superhumanly intelligent machines, and significant minorities doubted our ability to accomplish more modest goals as well.

in my own research, not all that many researchers either a) worry about this question or b) think such an outcome is likely any time soon (even though it is presumably possible). after all, as one roboticist at carnegie mellon university's robotics institute put it to me, we don't even understand the neural activity of lobsters (which have 214 neurons), so it is rather silly to suggest we'll have human mental abilities understood and/or replicated in the next twenty years (as many people predict, following upon the work of hans moravec and, later, ray kurzweil).

wonderfully, though, the authors point toward the fact that in a survey done outside of the AGI conference, 41% of respondents believed that human/superhuman ability was "more than 50 years off" (which covers a really, really long stretch of possible time frames) and another 41% believed such ability would never be achieved. this means that 82% believe that such technology is either impossible or an indeterminately long way off. the authors tell us that these numbers, like their own data from the AGI conference, "suggest that significant numbers of interested, informed individuals believe it is likely that AGI at the human level or beyond will occur around the middle of this century, and plausibly even sooner." this conclusion is simply without merit, as it is far more optimistic than the 82% who think that AGI is far off or impossible would warrant!

that said, we should all get behind goertzel, et al., who do provide one meaningful conclusion: "these days, the possibility of 'human-level AGI just around the corner' is not a fringe belief. it is something we all must take seriously." whether transcendently intelligent computers are coming or not is beside the point; the authors are absolutely correct that these ideas play a serious role in contemporary culture. let's not miss them as we wander around in our rose-colored glasses.

1 comment:

  1. I think this also raises the question: do we actually need to understand how the human brain works before we create something equivalent to human-level artificial intelligence? We might create something that is a functional equivalent, or find something where the sum is much greater than the discrete parts. However, I do agree: even if it isn't possible at the moment, the idea itself clearly does play a role in contemporary culture and has done so for a while now.