Sunday, October 4, 2009

David Chalmers

David Chalmers, the consciousness guy, spoke of how important simulation is in ensuring that strong AI is possible and controllable.  He used a very formal logical argument to show that 1) we can create AI with intelligence equal to our own, 2) AI can create AI+ (AI with greater-than-human intelligence), and 3) AI+ can create AI++ (superintelligence).  It was all very plausible except for the assumption that AI+ or AI++ can be constructed at all, by any technique.  Maybe human-level intelligence is all that’s possible.  Chalmers argues that we should isolate our AI experiments in closed simulated worlds and avoid talking to them or giving them information.  This would keep them from learning too much about us and taking over our world.  This is the most concrete proposal I’ve heard for having a safe, beneficial intelligence explosion.
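
For reference, the skeleton of that argument can be written out.  This is my rough paraphrase of the premises as he presented them, not a verbatim quote:

```latex
% Skeleton of Chalmers's argument (my paraphrase; "absent defeaters"
% is his hedge on every premise -- things like catastrophe or
% deliberate prevention could block each step):
\begin{align*}
&\text{1. There will be AI (human-level), absent defeaters.}\\
&\text{2. If there is AI, there will be AI+ soon after, absent defeaters.}\\
&\text{3. If there is AI+, there will be AI++ soon after, absent defeaters.}\\
&\therefore\ \text{There will be AI++, absent defeaters.}
\end{align*}
```

Laid out this way, it’s clear the real weight sits on premises 2 and 3, which is exactly where my doubt about whether AI+ is constructible comes in.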

In general, Chalmers thinks the Singularity will take centuries to arrive, rather than decades.  He also thinks it will be more productive to create superintelligence through a “dumb” path like simulated evolution.  This makes sense to me as the easiest way to create strong AI, and it answers critics who say that directly designing AI or AI+ is inherently beyond our capabilities.
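
To make the “dumb” path concrete, here is a minimal sketch of simulated evolution: a toy genetic algorithm in Python.  Everything in it (the bit-string genome, the all-ones fitness function, the population size and rates) is an illustrative stand-in; evolving anything like an AI would need a vastly richer genome and a simulated environment in which to score behavior.

```python
import random

# Toy simulated evolution: evolve bit-strings toward all-ones.
# All parameters below are illustrative stand-ins, not anything
# resembling what evolving an AI would actually require.

GENOME_LEN = 32
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.02

def fitness(genome):
    # Stand-in objective: count of 1-bits.  A real run would score
    # an agent's behavior inside a simulated world instead.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Rank by fitness and keep the top half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # Refill the population with mutated offspring of random parents.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```

The point of the toy is that no step requires the designer to understand the solution; selection pressure does the designing, which is why Chalmers considers this route an answer to the “we can’t design it” objection.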

Ultimately, the Singularity and intelligence explosion will happen.  We can choose to upload ourselves into the “simulated” world of AI++ and then enhance ourselves to try to keep up.  Chalmers argues that gradual uploading will work and even preserve our sense of self.  It’s worth a shot, since the alternative is extinction.
