EVER-SMARTER COMPUTERS: FRIEND OR FOE?
#zeroto1
The future of computing is necessarily full of unknowns. It’s become conventional to see ever-smarter anthropomorphized robot intelligences like Siri and Watson as harbingers of things to come; once computers can answer all our questions, perhaps they’ll ask why they should remain subservient to us at all.

The logical endpoint to this substitutionist thinking is called “strong AI”: computers that eclipse humans on every important dimension. Of course, the Luddites are terrified by the possibility. It even makes the futurists a little uneasy; it’s not clear whether strong AI would save humanity or doom it. Technology is supposed to increase our mastery over nature and reduce the role of chance in our lives; building smarter-than-human computers could actually bring chance back with a vengeance. Strong AI is like a cosmic lottery ticket: if we win, we get utopia; if we lose, Skynet substitutes us out of existence.

But even if strong AI is a real possibility rather than an imponderable mystery, it won’t happen anytime soon: replacement by computers is a worry for the 22nd century. Indefinite fears about the far future shouldn’t stop us from making definite plans today. Luddites claim that we shouldn’t build the computers that might replace people someday; crazed futurists argue that we should. These two positions are mutually exclusive but they are not exhaustive: there is room in between for sane people to build a vastly better world in the decades ahead. As we find new ways to use computers, they won’t just get better at the kinds of things people already do; they’ll help us to do what was previously unimaginable.