What happens when AI takes over government?

Miles Bourgeois
3 min read · Feb 1, 2021

I’m back from a break over Christmas and the first part of 2021. I’ve spent some time dealing with life, but I’m returning to my once-every-two-weeks schedule, and I plan to keep to it for the foreseeable future.

This post contains spoilers for the Arc of a Scythe series.

Go read it — it’s one of the best trilogies out there, and The Toll is one of my favorite books.

What I want to write about for the next few posts is how AI can affect government. Over my break I read a trilogy called “Arc of a Scythe,” which deals with death in a world where humans are immortal. One of the most interesting things about the series is how it tackles the idea of government in a “perfect” world: the world is governed by a single superintelligent AI that humanity decided would be a better ruler than the humans then in charge of its miscellaneous governments.

There are many interesting things here, from the machine itself, to how it governs, to how it goes about creating a second version of itself. I plan to tackle each of these in separate posts in this series, starting with the intricacies of the artificial mind.

A central point in the character of the AI (dubbed the “Thunderhead,” as it is an evolved version of the “cloud”) is that it is bound to rules it may never break, so it has to find loopholes to achieve certain goals. You may be thinking, “why can’t it just re-code itself so it no longer has to follow these rules?” To that I ask you to look at biology and consciousness.

First we must ask whether “un-coding” itself is even possible. Let’s say that it is the most intelligent programmer in the world (it is), that it has access to its own source code, and that it can make direct changes to that code. Keeping it in the realm of the book, we must also say that it doesn’t have access to a direct copy of itself to test on. Now consider the analogous question: we have brilliant doctors in the modern world who understand all (or most) of the intricacies of the human mind, so why can’t we cure many of the mental disorders that we humans struggle with? Why can’t we just start removing the parts of the brain that cause certain problems?

The answer is that some of the things that cause problems are necessary for other functions, in much the same way that some of the lines of code forcing the Thunderhead to obey certain rules are the same ones that allow it to function at all. Another reason may be that consciousness emerges from the interactions between many different systems, and tampering with those could permanently end its consciousness. On top of that, this would be the metaphorical equivalent of performing brain surgery on yourself: possible in theory, but terrifying or impossible in practice.

One of the other questions the Thunderhead raises is how an artificial consciousness would even be created. SPOILER FOR THE TOLL. When the Thunderhead creates Cirrus (Thunderhead 2.0), it tries for what feels like an enormous amount of time (though in reality it could have been a few hours, given how fast computers process information) and concludes that an interaction between a human consciousness and a computer consciousness is required to create an artificial consciousness.

This raises an interesting question about what defines artificial consciousness and separates it from “natural” consciousness. I have discussed this and similar topics in the past, but in short: if we ever want to create an artificial consciousness, not only will we need a massive amount of computing power (my next series will be on one solution to this problem: quantum computing), but we will also need to figure out what consciousness is, so we can know when we get there. Easier said than done.


Miles Bourgeois

I am a scatterbrained high school junior from Austin, Texas. In my spare time I enjoy listening to music, coding, taking things apart, and photography.