I sat in a San Francisco conference room a few months ago as 14 staffers at the charity recommendation group GiveWell discussed the ways in which artificial intelligence — extreme, world-transforming, human-level artificial intelligence — could destroy the world. Not just as idle chatter, mind you. They were trying to work out whether it’s worthwhile to direct money — lots of it — toward preventing AI from destroying us all, money that otherwise could go to fighting poverty in sub-Saharan Africa.

“Say you tell the AI to make as many paper clips as it can possibly make,” Howie Lempel, a program officer at GiveWell, proposed, borrowing a thought experiment from Oxford professor Nick Bostrom.

The super AI isn’t necessarily going to be moral. Even a seemingly benign goal could backfire: the AI might come to see the whole world as a resource to be exploited for making paper clips, for example.

“Just because it’s very intelligent doesn’t mean it has reasonable values,” Lempel said. “Maybe it starts turning puppies into paper clips.”
