The Technological Singularity (by Murray Shanahan)
Murray Shanahan is a cognitive roboticist at Imperial College London and his book is meant to be a primer on the “intelligence explosion” or the Singularity. He not only talks about how we could create human-level AI and superhumanly intelligent AI but also sketches the philosophical problems that creating these systems would pose.
Ultimately, I found his book quite confusing. His train of thought seemed to stop at some junctions and then leave from entirely different ones, though I do give him credit for trying very hard to signpost. It’s also surprising that Shanahan quite crisply describes Chalmers’ views and other philosophical positions, yet fails to apply those basic skills of philosophical argumentation (premises, inference, conclusion) to his own stance, which often felt like a hodgepodge of competing claims with little flow or rigor to it.
For instance, he never quite clarifies what he means by “superintelligence” but seems to endorse all of Bostrom’s conceptions of it. He mentions “creativity” and “common sense” as central to intelligence, but then one might ask whether those are attributes of intelligence or part of its definition. I wish he had co-written this book with a philosopher, as that would presumably have added rigor to his stance.