This being the first book review I've posted on the blog, I've chosen a book that will get wide circulation in tech circles, written by someone whose thinking I've followed for nearly a decade. Jaron Lanier is one of the fathers of virtual reality research and commands enormous respect from researchers in VR and Internet technology more broadly. In 2010, he made Time magazine's list of the 100 most influential people. He has also spent the last few years preaching against the beliefs advocated by Ray Kurzweil and others (whose ideas are documented in my book, which takes an agnostic position on both the morality and the likelihood of the fulfillment of the Apocalyptic AI agenda).
In short, Lanier believes that supremely intelligent computers are a) highly unlikely and b) not worthy of empathy. Instead of worshiping at the altar of robotic progress, Lanier argues we should be using technology to advance human interpersonal relationships. His early ideas were distributed through the Edge listserv and can be read here and here.
Lanier's new book, You Are Not a Gadget, advances these basic themes and explores how choices made in technology design have serious moral and social consequences, some of which he would like to reverse.
The book is, for the most part, a clearly written exposition which will be comprehensible to the lay reader and technocrat alike. Lanier's lucid English is, quite frankly, a delight compared to the work of many present day intellectuals, who seem to think they should model their exposition upon the likes of Foucault, Bourdieu, Derrida, or Adorno (in short, whichever impossibly dense theorist they choose to represent their interests).
Lanier's fundamental premise is that many of our current technological choices depersonalize human interaction (think of how an individual is reduced to a set of checked-off preferences at a social networking site, or the fact that people exist with thousands of alleged Facebook "friends"). Such technological designs are not self-determining, however, and Lanier feels that we can reinvigorate our technological and social lives by designing tech environments that emphasize what he calls the mystical aspects of human personhood and the development of, and social capitalization upon, personal creativity (a partial rejection of hive-mind and groupthink mentalities, which are not universally useful). He offers constructive advice on how this might be accomplished, and his vision is a valuable contribution to discussions about life in a digital culture. Lanier emphasizes how digital life can be cruel in a world of "drive-by anonymity" and would see that world reconfigured to advance personal responsibility for ideas and words. Likewise, he pushes for an acceptance of human persons as semi-mystical (and thus not reducible to the level of a machine) and capable of postsymbolic communication through advanced technological interfaces. I appreciate that he seems to have acknowledged that his own position is semi-theological, without the knee-jerk reaction some people have to religion.
The book is definitely excellent, though I found myself frustrated by one concern in particular. Given Lanier's repeated emphasis upon personal, individualized creativity, I find it disconcerting that he cites few sources for many of his claims and interpretations (and I confess to wondering whether a very brief e-mail interchange I had with him in 2007 about my Apocalyptic AI ideas led to his claim--for the first time that I'm aware of--that what he calls cybernetic totalism is a religious system ...see pages 18-25 or so of You Are Not a Gadget). While I recognize that a book written for a popular audience cannot be as rigorously documented as an academic work, I would have liked a bit more referencing throughout (which could have just taken the form of mentioning people and their work).
Fundamentally, I am in accord with Lanier. I do not believe that technological progress is self-determining. The accidents of history and personal choices of real individuals shape our future. We should advocate that individuals pay attention to the world in which they live, the world in which they wish they lived, and the ways in which technology plays out in that distinction. If we can attend to the design choices in our lives (and this is perhaps especially true for major content creators), we can work toward the betterment of society.
Monday, May 31, 2010
Friday, May 21, 2010
games & learning
last night a friend of mine told me that her son is among the first group of students at the new quest to learn school here in new york city. i had not heard of the school before but am very impressed at the ways in which the folks putting the school together seem committed to taking advantage of our evolutionary predilections to offer a real learning opportunity.
my wife and i are home schooling our children precisely because we bemoan the ways that school steals the fun and excitement out of learning. should the q2l model take off, perhaps we can save education in this country after all. learning ought to be a game...challenges that demand of us a combination of data acquisition, pattern recognition, organized expression, and novelty are the ones that we engage enthusiastically and effectively. they are the ones that will receive our loyal efforts even when the problems are difficult. games can be excellent at this. the sheer amount i learned from my interest in professional baseball and football as a child continues to surprise my wife, and i think my involvement in dungeons & dragons probably did much to encourage certain ways of thinking that continue to benefit me now as a researcher. those are just two kinds of gaming situations that promote long-term learning and the manipulation of what is learned in new and sometimes very helpful ways.
the q2l school asks students to learn through games, helping them to understand complex, dynamic relationships in scientific and social systems. the school's approach immediately calls to mind jane mcgonigal's faith that video games can save humanity (as widely publicized in this video).
mcgonigal argues that in video games we are inspired to work hard and to collaborate with one another. they encourage us to feel like we have something to offer and the ability to do well. playing video games trains youth in optimism, in forming social bonds with fellow players, in blissful productivity, and in developing a sense of meaning. all of these, she argues, are precisely what we need as a society. several of the games she's involved in actually change people's behavior. an oil shortage game, for example, has led to different usage patterns among the players. thus a conservationist game can promote a conservationist attitude more broadly. her thesis is, of course, a dramatic and debatable one. all good theses are.
in some sense, the q2l school should test some of mcgonigal's ideas as well as some more pedagogical issues about how people learn and what enables them to do so effectively. i don't know much about the school yet, but i look forward to reading more as it develops in the coming years.
Tuesday, May 11, 2010
digital souls at the ARC
this past week, i joined a few other folks on a panel at this month's meeting of the society for the arts, religion and culture to talk about "digital soul: artificial intelligence, augmented reality, and the fate of religion and the arts" and, as at my first event with ARC (three months ago), had a very good time.
the four of us on the panel were: anne foerst, a theologian who writes about the implications of robotics for thinking through human personhood and our religious obligations; siona van dijk, former director of the gaia community; gina bria, an anthropologist interested in ritual and technology, particularly with respect to families; and me (officially there to talk about video games).
my friend anne (she's german, so it's pronounced much like "anna") started the conversation by discussing how sin is a condition of estrangement and how feeling estranged from machines (automatically discounting them as persons) would, therefore, be sinful (just as discounting the personhood of other people on the grounds of senility, disability, youth, etc. is sinful). while i confess to having disputed her position on whether or not the MIT robot leonardo is "self aware," i find her politico-theological goal reasonable enough. leonardo has passed the sally-anne test for self-awareness, but i'm pretty sure that the only way a person establishes his or her self-awareness by passing that test is when the rest of us make ourselves a little less aware and a little more stupid. measuring something as complex as self-awareness through one simple test indicates the foolishness of the tester, not the awareness of the tested. that said, she was provocative and fun.
i then spoke about video games, presenting my position that while there might be some losses associated with souls (whatever those happen to be...i'm agnostic on the subject) in a world with increasing identification with video gaming, there are definite possible gains too, in the kinds of companions (AI and human), communities, and self-identities made possible in video game cultures.
then siona spoke about the online world's lack of utopia and her belief that escapism fuels technology. i'm pretty much on record rejecting the idea that virtual technologies are simple escapism but there's no doubt that she made a thoughtful engagement with some of the possibilities inherent in technology and the uses to which many folks put it.
finally, gina discussed the nature of soul, and whether such a thing can be thought of as digital (katherine hayles's book on this subject, for all its impossibly over-convoluted writing, remains the best text on the subject). she argued that technology is an extension of the human person into previously unreachable spaces and wondered whether it was even possible for technology to lack "soulfulness" given its origin in human creativity.
after we'd given our opening remarks, there was a 70- or 80-minute discussion of various ideas, most of which were woven out of the issues of machine intelligence and presence in virtual environments. problematically, the scope of the evening was wide enough that no idea got sufficient attention. the positive aspect of this, though, is that there were plenty of new ideas being tossed around. probably the best thought of the evening came from chuck henderson, editor of CrossCurrents, who argued that the worst thing about seeing intelligent machines as people might be that it would prevent us from understanding them for what they are and appreciating their needs, interests, etc. that will bear considerable reflection, i think, from folks interested in the subject.
all told, it was another fun get together, involving wine and loud argument. my kind of place. there are a lot of really intelligent folks in the ARC and i look forward to our next meeting in september or so.