

Back in 2008 my ex-co-blogger Eliezer Yudkowsky and I discussed his “AI foom” concept, a discussion that we recently spun off into a book. I’ve heard for a while that Nick Bostrom was working on a book elaborating related ideas, and this week his Superintelligence was finally available to me to read, via Kindle. I’ve read it now, along with a few dozen reviews I’ve found online. Alas, only the two reviews on GoodReads even mention the big problem I have with one of his main premises, the same problem I’ve had with Yudkowsky’s views. Bostrom hardly mentions the issue in his 300 pages (he’s focused on control issues). All of which makes it look like I’m the one with the problem; everyone else gets it.

Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. “Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them.
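To make that “averaged over many tasks” gloss a bit more concrete, here is one minimal way it could be formalized; the task set $T$, weights $w_t$, and per-task performance scores $p_t$ are my own illustrative notation, not anything taken from Bostrom or Yudkowsky:

$$
I(\text{agent}) \;=\; \sum_{t \in T} w_t \, p_t(\text{agent}), \qquad \sum_{t \in T} w_t = 1, \quad w_t \ge 0 .
$$

On this reading, machines “getting better at pretty much all mental tasks” just means each $p_t$ rising over time, which raises the weighted average $I$ no matter how the fixed nonnegative weights $w_t$ are chosen.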
