Uncontrollable [Book Review]
The book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World by Darren McKee couldn’t be more perfectly timed. The EU is currently taking the final steps to ratify the EU AI Act, landmark legislation on what you can do with AI, and how, going forward. In addition, we see ever better generative AI tools coming out; almost every day a new model, technique, paper, or idea is published, advancing the field at an unprecedented pace.
So when I heard about this book on a podcast (I sadly don’t remember which one), I had to pick up a copy for myself and read it. The book itself is split into three parts:
What is Happening? - A general, high-level overview of the current AI landscape. It also sets the definitions the rest of the book works off of, specifically what Artificial Intelligence (AI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI) are
What are the Problems? - This part takes up the majority of the book, going into detail on the different potential existential risks that could arise from AI, and more specifically from ASI
What can we Do? - The final part is the shortest of them all and talks about potential ways of mitigating some of the risks
Just by the weight of space given to each part, my impression is that the author is very alarmed about the potential risks of ASI. While I really liked some of the examples and explanations in the book, some were also presented oddly. A great example is Isaac Asimov’s three laws of robotics: they are framed as if they were considered good rules, when they were literally created as a literary device to tell stories, meant to be broken and played with. The issue is that while the author discusses ways the laws can be broken, he never mentions what the alternatives could be (not even in later chapters).
Another issue I have with the book is that most of the cited sources are news articles. Of course this is related to how current the topic is, but there is still a lot of research on alignment out there.
I was also quite disappointed by the solutions presented at the end: the author seemingly advocates for closed-source projects and heavy regulation. I personally disagree with both approaches, because when these models are purely black boxes we will never understand what is going on. Looking at the space, while the leading models are closed source, many of the advances around them come from open-source contributions, discussions, and ideas.
To set this into context: I am a techno-optimist. I strongly believe that with ongoing and open discussions we have the power to use this technology for a better future for all. And while beating the existential-risk drum is important, it feels a bit self-serving and bleak to me. The number one thing everybody can do is become interested in the topic, start to form an opinion on it, and take part in the public discourse. As the author points out, this isn’t a settled debate. He also repeatedly points to nuclear weapons, and I believe this is a perfect example: nuclear weapons represent an existential risk, and as humanity we have been walking this so-called tightrope for almost 80 years now. While we have stumbled from time to time, the tightrope feels like it has become more of a balance beam.
In the end, I would call this the most interesting problem we currently face as humanity, and as the author likes to say: “Unsolved does not mean unsolvable.” With that takeaway I can recommend this book, even though it feels a bit alarmist and anti-open-source to me.