After dozens of posts on theology, marriage, and politics, why not something about computers, an area in which I have actual expertise? Much of the work I have done in my day job has been building complex software applications, and on the side I have worked on projects which have required what is sometimes called “artificial intelligence”. I use scare quotes for good reason, but I’ll use the shorthand throughout this article.

AI has been in the news recently because some fairly sophisticated AI can “draw” images and “write” text based on prompts. It’s scaring a good number of people who think that there must be some singularity on its way soon, where machines will surpass human beings. In my early years working on computers I admit I had this concern as well, but as my faith, my understanding of the world, and especially my understanding of software and computers has increased, this concern has all but vanished. This post is to help explain why, by analogy, AI will never actually replace a human mind.

The AI image and text generators are statistical models with enormous numbers of weights, where the weights were calculated from training data and human feedback, and the models themselves were designed by human engineers.

That’s it. That’s really all of it.

AI is a marketing term, or a description of the use of statistical models. It isn’t actually a form of intelligence. When an AI “draws” a picture, it’s not drawing anything. It’s running a series of algorithms using a large pool of data as a starting point and weights to determine what to display and what not to display. This, incidentally, is precisely not what human artists do when they draw. How do I know? Or, more importantly, how do I know you can’t create an artificial mind out of a computer? An analogy will explain it.
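To make “statistical model plus weights” concrete, here is a toy sketch in Python, my own illustration rather than how any real system is built: a bigram model that counts which word follows which in its training data, then generates text by sampling from those counts. Real generators are vastly larger and more sophisticated, but the principle of weighted lookup over data is the same.

```python
import random
from collections import Counter, defaultdict

# Toy text "generator": count which word follows which in the
# training data, then sample the next word by those weights.
training = "the cat sat on the mat and the cat ate the fish".split()

weights = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    weights[prev][nxt] += 1   # e.g. "the" is followed by "cat" twice

def generate(start, length):
    word, output = start, [start]
    for _ in range(length):
        followers = weights[word]
        if not followers:
            break  # no data for this word, so the model is stuck
        word = random.choices(list(followers), list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the", 5))
```

Every word it emits was put there by the training data and the counting scheme a human chose. Nothing in the loop understands cats or fish.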

Every computer on earth – every single one – is ultimately a series of AND, OR, and NOT gates. Electricity is involved to reset these gates and transmit output between them, but these three logical gates make up everything. Each gate takes inputs (either high (1) or low (0)) and outputs something based on the type of gate. For example, an AND gate takes in two inputs and, if both are high (1), it outputs high (1), otherwise low (0). An OR gate outputs high (1) if at least one of its two inputs is high. A NOT gate takes a single input and outputs its opposite.
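Sketched in Python rather than metal, the three gates and their truth tables look like this:

```python
def AND(a, b):
    # High (1) only when both inputs are high.
    return 1 if (a, b) == (1, 1) else 0

def OR(a, b):
    # High (1) when at least one input is high.
    return 1 if 1 in (a, b) else 0

def NOT(a):
    # A single input, inverted.
    return 1 - a

# Full truth tables for the two-input gates:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```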

So what?

Each of these gates in a computer is built out of arrangements of transistors etched into silicon, but there’s no reason you couldn’t make these gates out of something else. Say, dominoes. People have designed and built, fairly easily, all three types of these gates out of falling dominoes.

If you chain these gates together, you can get all kinds of secondary gates, including memory (e.g. gates that can “save” a high or low state), all the way up to a modern CPU. You just need a LOT of them.
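To see how “memory” falls out of plain gates, here is a rough, self-contained simulation of one classic construction, a set-reset latch: two cross-coupled NOR gates (a NOR is just an OR followed by a NOT) whose feedback loop holds one bit. This is a toy model of the idea, not a hardware description:

```python
def NOR(a, b):
    # An OR gate followed by a NOT gate.
    return 0 if (a or b) else 1

def sr_latch(s, r, q_prev):
    """One step of a set-reset latch built from two cross-coupled NORs.
    s=1 stores a 1, r=1 resets to 0, and with s=r=0 the latch
    simply remembers its previous value q_prev."""
    q, q_bar = q_prev, 1 - q_prev
    for _ in range(4):          # iterate until the feedback loop settles
        q_next = NOR(r, q_bar)
        q_bar = NOR(s, q)
        q = q_next
    return q

print(sr_latch(1, 0, 0))  # set: stores 1
print(sr_latch(0, 0, 1))  # hold: still 1
print(sr_latch(0, 1, 1))  # reset: back to 0
```

Nothing here “remembers” in any mental sense; the bit persists only because the two gates keep feeding each other their outputs. Stack enough of these and you have registers, then RAM, then a CPU.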

Suppose you had an unlimited number of dominoes, and a machine that would stand each one back up a set time after it falls.

You could build any computer out of dominoes. Any computer. Sure, it would be slow and absolutely enormous, but it would produce the same output. Any computer and any software, mind you. This includes sophisticated AI that generates images and text. It includes the most sophisticated AI you could ever build.

The question is: how many dominoes would it take before the entire massive construct of dominoes gains sentience and self-awareness and begins to think for itself?

The answer is that there is no such number. A trillion falling dominoes are still just falling dominoes. For the same reason, no computer AI will ever gain sentience or self-awareness or think for itself. This isn’t just unlikely. It’s not possible.
