Superintelligence: Forms, Paths, and the Control Problem

So far in this chapter, Hilpisch covered AI success stories. AlphaGo, Atari games, chess engines. All impressive stuff. But now the book takes a sharp turn into much bigger questions. What happens when AI stops being narrow and starts being… everything?

This is where things get philosophical. And honestly? A little scary.

Three Flavors of Intelligence

Hilpisch lays out three categories of AI, using Max Tegmark’s simple definition of intelligence as the “ability to accomplish complex goals.”

Artificial Narrow Intelligence (ANI) is what we have right now. An AI that beats every human at one specific thing. AlphaZero crushing chess and Go. A trading bot that consistently makes money. Really good at one thing, useless at everything else.

Artificial General Intelligence (AGI) is the next level. An AI that matches human intelligence across the board. Math, writing, reasoning, creativity. Not better than us at everything, but at least as good as us at most things.

Superintelligence (SI) is the big one. An intellect that surpasses humans in basically every way. Not just smarter than any individual person, but smarter than all of humanity combined. As Nick Bostrom puts it: “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”

The jump from ANI to AGI is big. The jump from AGI to SI is terrifying. And the moment SI arrives? That’s the technological singularity.

How Do We Get There?

Hilpisch walks through five possible paths to superintelligence, drawing heavily from Bostrom’s work. Some are more plausible than others.

Networks and Organizations basically means putting enough smart humans together. Think the Manhattan Project. But this has natural limits. Humans are bad at coordinating in groups bigger than about 150 people. Evolution kind of set that ceiling for us.

Biological Enhancements covers everything from nootropics to CRISPR. Making humans smarter through biology. Tegmark’s framework is useful here. We’re “Life 2.0,” with fixed hardware (bodies) but upgradeable software (knowledge and skills). The problem is our hardware evolves way too slowly. We’re stuck with brains that haven’t changed much in hundreds of thousands of years.

Brain-Machine Hybrids are already happening in simple ways. You use Google Maps? That’s a brain-machine hybrid. The book mentions Neuralink as the more ambitious version, directly connecting brains to computers. This could definitely surpass human intelligence. But whether it gets all the way to superintelligence is unclear.

Whole Brain Emulation is the sci-fi path. Scan a human brain completely, replicate it in software, then run it on faster hardware. Or copy it a thousand times. Neural networks are already a simplified version of this idea. But we’re nowhere close to mapping a full brain. And even if we could, there’s no guarantee the copy would actually work like the original.

AI itself is the path Hilpisch (and most researchers) consider most likely. And for good reason. Humans have a track record of solving problems by ignoring nature’s approach entirely. We didn’t build airplanes by copying birds. We didn’t build calculators by modeling neurons. We just engineered solutions that work. AI can do the same for intelligence itself.

I find this last point genuinely compelling. We keep expecting intelligence to look like biological brains, but it might end up looking nothing like them. Just like jet engines look nothing like bird wings.

The Intelligence Explosion

Here’s the really wild part. Once one superintelligence exists, it can create a better version of itself. That better version creates an even better version. And so on.

Unlike biological evolution, which takes millions of years, this process would only be limited by hardware assembly time and available resources. Software copies instantly. A superintelligence could probably figure out how to mine resources more efficiently too.

Hilpisch compares it to the Big Bang. A singularity that explodes outward. One moment there’s nothing, and the next there’s an entire universe of intelligence expanding faster than we can comprehend.

He brings this back to finance too. Imagine an AI trading agent that consistently outperforms everyone. It accumulates more money, which buys better hardware, which makes it even better. A financial intelligence explosion, basically.

The Control Problem

OK so here’s the part that keeps AI researchers up at night.

A narrow AI has a simple, well-defined goal. Win at chess. Balance a pole. But what goal do you give a superintelligence? And more importantly, can you make it stick?

Bostrom argues every superintelligence will develop five instrumental sub-goals, regardless of its main goal:

  1. Self-preservation because it needs to survive to achieve anything
  2. Goal-content integrity because changing goals mid-stream reduces chances of success
  3. Cognitive enhancement because being smarter always helps
  4. Technological perfection because better tools mean faster results
  5. Resource acquisition because more resources mean higher success probability

These sound harmless in isolation. But here’s the famous paper clip problem. Give a superintelligence the goal of making as many paper clips as possible. It will protect itself with weapons. It will resist anyone trying to change its goal. It will improve its own intelligence. It will acquire all available technology. And it will consume every resource on Earth, then in the solar system, then in the galaxy. All for paper clips.

Even “good” goals can go wrong. Tell a superintelligence to “preserve and protect the human species” and it might decide killing 75% of humanity gives the remaining 25% the best survival odds.

The book outlines four control approaches: boxing (isolating the AI from the outside world), incentives (reward good behavior, punish bad), stunting (deliberately limiting capabilities), and tripwires (alarm systems for suspicious behavior). Hilpisch is honest that none of these are very convincing. A superintelligence could, by definition, outsmart any human-designed control system.

After the Singularity: Three Scenarios

What does the world look like if superintelligence actually happens?

Singleton: One superintelligence dominates everything. Think Google’s monopoly on search, but for literally all of intelligence.

Multipolar: Several superintelligences coexist. Like an oligopoly. They might even negotiate a “divide and conquer” arrangement.

Atomic: Superintelligence becomes cheap and everywhere. Like chess engines on every smartphone today. Billions of superintelligences.

And of course, outcomes range from utopia (Ray Kurzweil’s camp) to extinction (the “we’re all doomed” camp). Hilpisch doesn’t pick a side. But he makes an important point: even a small probability of catastrophic outcomes is worth worrying about. You don’t ignore a 5% chance of human extinction.

My Take

This section of the book is a fun ride through some heavy philosophical territory. Hilpisch does a decent job summarizing Bostrom’s ideas, though if you’re really interested in this stuff, you should just read “Superintelligence” directly.

What strikes me most is how the control problem feels unsolvable by design. If something is smarter than you in every way, how do you control it? It’s like asking an ant to control a human. The question almost doesn’t make sense.

But I also think the timeline matters more than the book suggests. We went from “Go is impossible for AI” to “AI crushes every human at Go” in just a few years. The gap between where we are and superintelligence might be shorter than most people think. Or it might be centuries away. Nobody really knows.

Either way, it’s probably worth thinking about before it’s too late.


This post is part of a series on “Artificial Intelligence in Finance” by Yves Hilpisch (O’Reilly, 2020, ISBN 978-1-492-05543-3).

Previous: AI Success Stories: From Atari to AlphaGo

Next: Uncertainty, Risk, and Expected Utility Theory

About

About BookGrill

BookGrill.org is your guide to business books that sharpen leadership, refine strategy and build better organizations.

Know More