Life in a Kingdom of Dangerous Magic
Or, why regulating artificial intelligence is so difficult.
Brought to you by Tegus
It can take hours of research to make a smart investment. But what if it didn’t?
Tegus pulls together insights from credible experts and the best financial data, all in one place. With top-notch quality across expert calls, transcripts, financial models, and easy-to-cite SEC data, you’ll get the unique perspectives and deep insight you need to make bold, high-yield investment decisions. And you can do it all in just a few minutes.
Do more, get better, move faster. Thanks to Tegus.
Trial Tegus for free today.
If you only have a few minutes to spare, here’s what investors, operators, and founders should know about regulating artificial intelligence.
Indecent proposals. As artificial intelligence booms, suggestions for how to regulate it have emerged. Earlier this year, a coalition including Elon Musk argued for a hard pause on developing cutting-edge models. Not long after, OpenAI CEO Sam Altman proposed a licensing process that would restrict the number of companies operating high-end models. So far, no suggestion appears particularly promising.
Imperfect options. In defense of the safeguards suggested to date, regulating AI is an extraordinary challenge. Though managing any disruptive technology is difficult – a dance between encouraging innovation and minimizing harm – the speed and structure of AI make the task uniquely thorny and consequential.
Under the gun. We may not have much time to devise an appropriate regulatory response to AI’s renaissance. The advances of the past year have compressed timelines radically. We may now be just a breakthrough or two away from ungovernable technologies.
Guru warfare. Not all agree with this worrisome portrayal. For as many experts as there are fretting about AI, an equivalent number seem to view it as an overwhelming good, set to improve our lives beyond measure. Which guru should we listen to? This is a question everyone must examine for themselves rather than relying on an information leader.
Iterative governance. No single rule will eliminate the risks of AI. More than specific regulation, what is needed is a new, more flexible form of decision-making. Technology’s radical speed greatly outmatches our current institutions. To have any hope of adapting to a changing landscape, we must find ways to embrace more iterative governance.
The advent of sophisticated artificial intelligence models has granted America’s public and private sectors new power that we are not sure how to control. The rapid advancements released over the last nine months have elicited a range of responses, from overwhelming wonder to real fear. It has also invited a few practical suggestions. In March, for example, more than 1,000 technological luminaries, including Elon Musk and Apple co-founder Steve Wozniak, argued for a hard pause on bleeding-edge AI model development. (Incidentally, Musk set about working on two new AI initiatives – X.AI and TruthGPT – around the same time.) Two months later, OpenAI CEO Sam Altman proposed the creation of a bespoke regulatory agency that would limit high-end AI development to an anointed few.
A thinking person may reasonably read these suggestions and find them unsatisfying. They are wanting in substance, nuance, common sense, and half a dozen adjectives besides. We know global innovation does not stop just because we ask it to. We understand that a CEO arguing plainly in favor of their own interests makes an unreliable advocate.
Though riddled with idiosyncratic deficiencies, these proposals reveal a general rule: There is no good solution to regulating artificial intelligence. Regulate one aspect of the sector, and power accumulates in others; exert pressure in one direction and it spills out somewhere else. To some degree, this is true of all technology, which requires a balance between innovation and regulation. But AI’s structure, accessibility, and capability make it uniquely dangerous and difficult to control. To understand this dynamic, let us put it in the frame of a fable.
One morning, in a benevolent kingdom, the monarch calls for his chief advisor. He has extraordinary news. The royal alchemists, those strange and lank-haired seers that toil sunless in an abandoned stable, have made a discovery. Through luck, sweat, and error, they have concocted a potent new elixir. The king is rightly pleased. Many generations have discussed the potential for such an elixir, and much effort has been expended in bringing it to life. The advisor knows well that many rival states have tried to alight on its recipe. By arriving first, the kingdom has secured a powerful advantage.
If the king were a different man with firmer patience, the advisor would ask for greater details on what power the elixir gives to those who consume it. Does it provide the strength of ten oxen? The speed of the sleek, circling swifts? The wisdom of one’s entire ancestral line? The advisor does not ask. He is left only with the king’s brief, four-part depiction of the elixir’s unusual properties:
Every time the elixir is made, it is easier for it to be made again. Making the one hundredth batch is much simpler than making the tenth and the one thousandth will be easier still. Today, it requires arcane knowledge, rare materials, and arrays of intestinal apparatuses. Eventually, a moderately learned person with the felicitous recipe will be able to make the elixir in their kitchen.
Every successive elixir is more powerful than the last. Already, the versions of the elixir the alchemists made fresh this morning are radically more potent than those brewed a week prior. None are sure where this self-fortification will culminate. Some wracked alchemists fear the elixir may someday effervesce into a kind of atmospheric sprite inspired by its own volition. A silly fancy, the king assures his chief.
One man with the elixir is stronger than one thousand without it. This is a potion that grants asymmetrical power. In a war, one would prefer to be on the side of a tiny army blessed with the elixir than a vast one without it. Though rival states may not realize it, the kingdom has become the most powerful entity in the realm. At least until another masters its formulation.
Those with the elixir are capable of extraordinary good and extraordinary evil. The potion is, at its core, power. It is capacity, ability, strength – in the broadest meaning of these terms. To be buoyed by it is to feel as forcible as a god, but one with worryingly human desires and ethics.
There may be other important traits, but such information will suffice for the moment, the king tells his advisor. It is clear from the cautious manner of the monarch’s speech that he considers the elixir’s provenance of grave importance and no little secrecy. His subsequent request confirms its significance. By the next morning, the king would like his chief to deliver a series of recommended decrees designed to maximize the elixir’s potential and safeguard against internal and external unrest. If handled correctly, the king believes the elixir may usher in a new age of prosperity. But it could invite catastrophe if mismanaged.
To ensure the chief advisor gives the matter appropriate focus, the king informs him that should he fail to produce a defensible recommendation by sunrise, he will be executed.
It is an unenviable task. Beneath its burden, the chief advisor returns home, sequestering himself in his study. At a small desk, he sits and begins considering how to reckon with the elixir’s dangerous magic.
The first idea that comes to him is a blunt one: a temporary ban. His initial thinking is sound enough: this is a technology on the move – accelerating on multiple dimensions at once. As the king said, it is recursively more powerful and easier to produce. Isn’t it better to pause its development before it becomes even harder to deal with? What is needed most is time, consideration, the marshaling of appropriate resources. A brief moratorium allows for all such things.
For a few moments, the advisor allows himself a sigh of relief. He has solved it, and all before lunchtime. But as he casts his eye out the window of his study toward the mountains that separate this kingdom from the next, his mind finds his argument’s fraying edges.
How far ahead is the kingdom from its rival states? The chief knows that many other rulers have sought to create such an elixir and devoted considerable resources to doing so. Can he be sure that only the royal alchemists possess its power? And if so, for how long?
The advisor knows he does not live in a unipolar realm. Beyond the mountains on one side and past the sea on the other are great and potent dominions with the capacity to match the kingdom’s output and perhaps surpass it. A temporary ban would play precisely into their hands, giving them the time to catch up and build a lead. The same accelerating dynamics that make a moratorium tempting also make it dangerous – granting enemies six months of uncontested development could produce a decisive advantage. Rather than maintain stability and supremacy, such a move could foretell the kingdom’s decline.
The advisor allows himself a naive, happy thought. Could the king convince fellow heads of state to follow his lead? Might the realm’s disparate fiefdoms band together and commit to a universal pause? In such a scenario, the kingdom would buy time without surrendering an advantage. It is a pretty idea, but not a practical one. Even if the vast collection of kings and queens, shahs and czarinas, rajahs and baronesses committed to the dictum on its face, one could hardly trust their sincerity. Each would naturally seek the elevation of their own power and, as such, invest in furthering their elixir-making technologies up to the point of detection. Once again, the kingdom would gain little in such a scenario; the winners would be those rulers best equipped to concur outwardly and connive in private.
The advisor had reached a dead end; he turned and set out in a new direction. Lunchtime had long since passed, but he had lost his hunger. He brewed a pot of afternoon tea and watched the steam fog his window. As he did so, he played out his second idea. It was one the king was sure to like.
Rather than ban the technology, the king should tightly control the elixir’s production and use. Though already under his control, the royal alchemists should become a formal extension of the state. Any attempts by alchemists to export their knowledge to external entities – a private company, for example, or, god forbid, a foreign domain – would be ruthlessly punished. The king had said that the elixir granted the power of a deity; who better to wield it than the state itself?
Yes, the advisor knew the king would like this idea very much. But did he like it very much himself? It was the dusk hour. He watched the sun wash the sky in rosy color, then dip behind the earth. His children, with their great and trusting eyes and cheeks warm from bathing, wished him goodnight. He heard their footsteps on the wooden floors above him.
The king would like this idea very much, but the king was not a good man. Or rather, he was not only good. On the eve of his scheduled death, the chief knew this well. Propelled by the elixir’s power, what might he do? For decades, the state had existed in relative stability. In part, that was because its ruler could not act with impunity. There were enough other powerful entities – wealthy merchants, influential generals, and a confident populace – that his worst impulses could not go unchecked. What would happen to that balance if the king held the elixir’s power alone?
The chief thought of his children, sleeping overhead. What was the difference between a king and a tyrant? A matter of degree, he thought. A matter of balance. He imagined life beneath such a regime. One of civilian control, of alchemists and generals moving in lock-step, of innovation sharpening the blade of soldiers who moved upon foreign lands. The elixir was capable of extraordinary good, the king had said. In this light, the advisor could not see it.
He had reached another dead end. He walked to his kitchen and brewed a pot of coffee. Sipping it, he stared into the dark. Night was a fastness about him. In a matter of hours, he would be summoned to the court. And there he would be killed. He searched the sky for the first signs of light, but they had yet to arrive.
The advisor tried a third route. Perhaps this was all a problem of balance. Rather than rejecting the elixir’s power by pausing its production or concentrating it in the monarch’s hands, the king should look to spread it within his state. The royal alchemists should be permitted to start laboratories of their own and sell the elixir to those able to pay for it. Others might be hired by those wealthy merchants who ran the kingdom’s moneyed enterprises, ballooning their coffers. In time, and in keeping with the elixir’s characteristics, even small groups would be able to concoct their own versions of it.
Outside, night remained immovable, but the advisor felt a burnishing in his heart. What a bright and daring world that would be! The advisor considered all the good that might be done under such conditions. In the hands of the people and those companies best equipped to use it, the elixir might manifest a thousand miracles, materializing new riches, distributing resources, uplifting the unlucky, and broadly doing the work one calls progress. Untethered innovation and the free market were the solution.
The advisor sipped the last of his coffee and looked upon the grounds that gathered at the bottom of his cup. Was he being foolhardy? He had been an advisor long enough to know the difference between wanting something to be true and it being so.
He thought of the third point the king had enunciated in describing the elixir. It was, in effect, an increasingly asymmetric tool, giving one person the power of a thousand – or more. With insufficient constraints, how many dark-hearted souls might clasp their hands about the elixir and use it to suit their ends? Indeed, even a single scoundrel could foment incredible disruption, damage, and hardship. Imagine a hundred such people! A thousand! The advisor imagined a life marked by volatility, instability, and asymmetric violence, in which any motivated rogue citizen could upend the lives of many others. And he did not find it farfetched.
The advisor considered a significant tweak. Perhaps the technology was too powerful for the masses to enjoy. Though he appreciated the ingenuity of multitudes – he saw their inventiveness every time he walked through the grand and sprawling bazaar with its panoply of spinning contraptions and pungent emollients – perhaps the risks outweighed the rewards. It might be that the common person could not be trusted, and a world in which everyone possessed godlike gifts was one in which threats lived down every alley and passage. The monarch could be effectively counterbalanced solely by a robust private sector. While making civilian production and usage illegal, the court could allow entrepreneurial alchemists and established moneyed enterprises free rein. These companies could develop pragmatically to meet market demand, maximizing utility and minimizing asymmetric risk.
How should the king ensure that only credible companies got their hands on the elixir? There should be some gauntlet to run – a set of requirements to meet. Perhaps only those firms with sufficient capital and technical expertise to develop the elixir productively should be permitted to operate, for example. Those who met various requirements could then receive permission, the king’s license, to conduct their work.
The advisor knew immediately that this, too, had its faults. He knew well the men and women who had grown rich by building their enterprises. Among them were many good and capable citizens and some that had achieved remarkable things. But there were just as many venal, prideful creatures in their class, deranged by wealth and power yet blind to their effects. The more the advisor thought of them, the less difference he saw between their number and the crooks and vandals among the common populace. They were surer of standing and sheltered by bounty, but that made them no less dangerous and, in fact, a good deal more so. What would this greedy, unsteady class become when stewards of the elixir’s raw power?
The advisor foresaw a kingdom governed by an anointed few. Those industrial barons with the necessary authority would use it to cement their standing and avoid competition from below. Within this gilded class, a great contest would fracture the kingdom into informal fiefdoms. Each mogul would race to make more elixir and increase its potency so that they might build an advantage over their rivals. In the chase, corners would be cut, safeguards neglected. The advisor knew well how the minds of the successful could narrow down to the task at hand, could shrink the matter of life into winning and losing. They would unleash great miracles in the process – he was sure of that. He was sure, too, that they would invite many disasters, most without noticing. In the end, the state would have not one king but dozens, each with the power of a god of the old world.
A thin stripe of orange bled across the horizon. How much time did he have left? The advisor reviewed the ideas he had devised, of which he counted four main strands in total. The king could pause all development of the elixir at the risk of external enemies surpassing the kingdom. Alternatively, he could tightly control production and usage, which might lead to an overbearing, authoritarian royal regime. Or he might allow the elixir to move without constraints, potentially giving ill-intentioned civilians the ability to inflict asymmetric damage. Finally, the king could consider limiting the elixir to only those wealthy businesspeople capable of devoting serious resources to it. However, that could hasten the emergence of a rapacious cadre acting in their narrow interests and ignoring risks.
Decline, tyranny, anarchy, oligarchy. The advisor spoke those words in his mind, and as he did, he saw the sun break across the land and waited for the summons that could not be far behind.
Here, our story ends because this is where we are. We sit, as the advisor does: running out of time to solve a mortal problem with only imperfect solutions available.
Not all will agree with that bleak assessment. Plenty of intelligent practitioners consider AI fear overblown. Marc Andreessen, for example, sees this technological revolution as a more-or-less unblemished good that can make “everything we care about better.” If this group is right, such hand-wringing is of no importance, a form of masochistic theater. We have our laws, our protections, and with a few minor adjustments, we will embark into this synaptical age with sufferable turbulence. Those who think otherwise are mere bedwetters.
For as many experts as view the AI boom with sanguinity, an approximately equal and equally esteemed number approach it with sincere concern. Legendary researchers like Geoffrey Hinton have voiced their worries with little equivocation, referring to AI as an “existential threat” in the not-too-distant future. The technology’s rapid development is a particular concern. “Look at how it was five years ago and how it is now,” Hinton said of AI in one interview. “Take the difference and propagate it forwards. That’s scary.”
If the optimists view the pessimists as bedwetters, the pessimists see their counterparts as lotus-eaters, naive and credulous.
Time will prove one of these perspectives astute. History is filled with faulty eschatological pronouncements centering around hazy, poorly understood technologies. It is also full of ignored warnings and preventable catastrophes.
Which side should we listen to? There is no simple answer, no authority to defer to. You may pick a guru if you wish, but each side is well-stocked with experts and geniuses. There is no obvious coalition of the sane and confederacy of dunces. Each side can discuss the technology in depth and explain why it can or cannot do, think, feel, or want. Each side can call on historical precedents and philosophical theories. The only way to authentically answer this question is to examine it freshly and make up your own mind.
When considering civilizational risks, it is better to overreact than underreact. Here, we can return to the conundrum of the chief advisor. If you agree that it is important to do something, the question becomes: what something should that be?
In situations with insufficient information to make a good decision, you must minimize the impact of making a bad one. To do so, you must create an environment that simplifies reversals and course corrections in response to new context. This “iterative governance” is antithetical to how our institutions run today. One of the superstories of our era seems to be the mismatch between the speed of technological innovation and the stop-start trudge of regulation. Naturalist E.O. Wilson’s description of the “real problem of humanity” as our blend of “Paleolithic emotions, medieval institutions and godlike technology” becomes ever truer.
If we are to have a chance of maximizing AI’s benefits and minimizing the potential for cataclysm, we must create a faster, more flexible method of introducing safeguards. Given the weakness of many institutions, creating such a framework may prove nearly impossible. Still, we must try to build one, such that our first decisions matter less than how we make them. There is no perfect answer; instead, we seek to give ourselves the ability to act, learn, and improve.
This is the advisor’s recommendation to the king. We will see if it is enough to save him.
The Generalist’s work is provided for informational purposes only and should not be construed as legal, business, investment, or tax advice. You should always do your own research and consult advisors on these subjects. Our work may feature entities in which Generalist Capital, LLC or the author has invested.