Is Transformative Artificial Intelligence Just Around the Corner?

By James Pethokoukis

March 5, 2025

James Pethokoukis is a senior fellow and the DeWitt Wallace Chair at the American Enterprise Institute, where he analyzes the US economy. He is the author of the book, The Conservative Futurist: How to Create the Sci-Fi World We Were Promised, and writes the popular Substack Faster, Please!

Guessing when radically transformative artificial intelligence (TAI) will arrive has become something of a parlor game in Silicon Valley, with predictions ranging from imminent revolution to distant dream. Yet it’s important to identify, if possible, any signs suggesting that TAI is imminent. Such signals could encourage the government to prepare by bolstering welfare systems, retooling education to prize uniquely human skills, crafting sensible guardrails for AI deployment, and—crucially—smoothing the path for workers whose livelihoods may be upended. These efforts, though fiendishly difficult given the inherently murky timeline, might spell the difference between a jarring disruption and a more orderly transition as machines grow ever smarter.

But what are we talking about, exactly? What is meant by “transformative artificial intelligence”?

You will recognize its arrival by its socioeconomic fruits, not by technical specifications or computational benchmarks. Just as the steam engine’s worth was measured, ultimately, not in horsepower but in how it revolutionized manufacturing, and just as electricity’s value emerged not from voltage but from its illumination of modern life, transformative AI will announce itself through its pervasive effects on society.

Historical parallels are instructive. When electrification became widespread in retooled American factories in the early 20th century, few questioned whether it marked a genuine revolution. The evidence was written in soaring productivity figures and reorganized factory floors. The internal combustion engine’s transformative nature revealed itself not in mechanical specifications but in the reshaping of cities and the death of distance. Synthetic materials like plastics demonstrated their importance not through chemical formulas but by infiltrating every corner of modern life. So it will be with TAI. The impacts across all aspects of life will be so profound as to make unnecessary any debate about the significance of this technological advance.

What might the world of TAI look like?

The bosses of leading AI companies offer intriguing visions. In a February blog post, OpenAI CEO Sam Altman describes “the beginning of something for which it’s hard not to say ‘this time it’s different,’” predicting extraordinary economic growth and human advancement. He envisions a near future where “anyone in 2035 should be able to marshal the intellectual capacity equivalent to everyone in 2025.” While acknowledging massive changes ahead, Altman emphasizes that “life will go on mostly the same in the short run”—we’ll still fall in love, create families, and hike in nature. The key task, as he sees it, is ensuring the broad distribution of TAI’s benefits, with particular attention to preventing stark imbalances between the people who own the brilliant machines and the workers who don’t. The ultimate goal is to give everyone “access to unlimited genius to direct however they can imagine.”

Anthropic CEO Dario Amodei sketches an even more expansive scenario in his late 2024 essay, “Machines of Loving Grace.” He envisions what he calls a “compressed 21st century,” in which superintelligent systems deliver a century’s worth of scientific and social progress in under a decade. The transformation would begin in medicine, eliminating most cancers and mental illness while doubling human lifespans. Economic development would accelerate dramatically, with TAI potentially driving 20 percent annual gross domestic product (GDP) growth in poor countries. Even thornier challenges like strengthening democratic institutions could yield to AI’s capabilities through “uncensored” information flows and improved government services.
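A bit of back-of-the-envelope arithmetic, implied by that 20 percent figure though not spelled out in the essay, conveys the scale:

```latex
% Compounding at 20 percent per year (illustrative arithmetic only):
(1.20)^{10} \approx 6.2
\qquad \text{and} \qquad
t_{\text{double}} = \frac{\ln 2}{\ln 1.2} \approx 3.8 \text{ years}
```

In other words, an economy growing at Amodei’s hypothesized rate would double roughly every four years and stand about six times larger after a decade, a pace beyond even the fastest catch-up booms of the 20th century.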

Utopian? Perhaps. But Amodei argues this represents not sci-fi fantasy but the natural culmination of existing trends. He sees the trajectory as “overdetermined,” in that multiple paths of human progress—from disease eradication to economic development to democratic advancement—all point toward similar outcomes, which AI would simply accelerate by building “a country of geniuses in a datacenter.”

The great leap forward in AI that both CEOs are talking about is often called artificial general intelligence (AGI). AGI typically refers to an AI system that can match or exceed human-level performance across most intellectual tasks and fields, working autonomously like a human would. But the term “artificial general intelligence” isn’t without controversy. Critics object that it suggests a binary state—either an AI system is “general” or it isn’t—when intelligence actually exists on a spectrum. Today’s AI systems already exceed human performance in specific domains while lacking capabilities in others, making the boundary between “narrow” and “general” AI increasingly blurry. Thinking about a clear boundary that marks when human-level AI begins also carries problematic cultural baggage thanks to science fiction, which often evokes unrealistic (so far, at least) images of sentient robots or sudden breakthroughs when AI “becomes aware” and decides to eliminate its carbon-based creators.

Anthropic’s Amodei prefers less evocative terms, like “powerful AI,” that avoid these connotations. But other technologists like OpenAI’s Altman are more comfortable with the term. While acknowledging AGI as “weakly defined,” Altman uses it to mean systems that can “tackle increasingly complex problems, at human level, in many fields.” He frames AGI as “just another tool in this ever-taller scaffolding” we’re building.

And there are lots of ways of determining if that tool, in fact, now exists—whatever term you want to use to describe it. The Metaculus prediction platform captures the notion that AGI isn’t any one thing by creating separate metrics for weak and strong AGI. For the former, the four basic criteria are passing the Loebner Silver Prize’s Turing test (fooling judges through five minutes of unrestricted conversation), achieving 90 percent accuracy on Winograd Schema challenges (resolving ambiguous sentences), scoring in the 75th percentile on SAT mathematics, and efficiently conquering the notoriously difficult Atari video game Montezuma’s Revenge. Crucially, this must emerge from an integrated intelligence tool rather than a collection of narrow specialist AI systems. At the moment, the community’s consensus forecast is that weak AGI will arrive by late October 2026.
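For the curious, here is a rough sketch, in code, of how those four criteria combine. It is an illustration of the all-four-from-one-system logic only, not Metaculus’s actual resolution machinery; the class and function names are invented for this example:

```python
# Illustrative sketch of the weak-AGI resolution logic described above:
# all four criteria must be met, and by a single integrated system rather
# than a bundle of narrow specialists. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class WeakAGIEvidence:
    passes_loebner_silver_turing_test: bool  # five minutes of unrestricted conversation
    winograd_schema_accuracy: float          # fraction of ambiguous sentences resolved
    sat_math_percentile: float               # percentile against human test-takers
    beats_montezumas_revenge: bool           # efficiently conquers the Atari game
    single_unified_system: bool              # one integrated tool, not an ensemble

def resolves_weak_agi(e: WeakAGIEvidence) -> bool:
    """True only if every criterion is satisfied by one integrated system."""
    return (
        e.single_unified_system
        and e.passes_loebner_silver_turing_test
        and e.winograd_schema_accuracy >= 0.90
        and e.sat_math_percentile >= 75.0
        and e.beats_montezumas_revenge
    )

# Strong scores from a *collection* of narrow specialists still fail the test.
print(resolves_weak_agi(WeakAGIEvidence(True, 0.93, 80.0, True, False)))  # False
```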

The bar is set even higher for strong AGI and includes robotics: mastering a two-hour adversarial Turing test (in which judges actively attempt to unmask the AI through text, images, and audio), demonstrating physical dexterity (by assembling intricate scale models from written instructions), exhibiting broad expertise (achieving 75 percent minimum and 90 percent mean accuracy across specialized knowledge domains), and conquering interview-level programming challenges with 90 percent accuracy. Gone are those simpler benchmarks of earlier frameworks. The current Metaculus forecast: early May 2030.

But AGI may prove just a way station on the path to something far more profound. The next frontier, according to some researchers, is ASI: artificial superintelligence. Such systems would not merely equal human cognitive abilities but vastly exceed them, much as Homo sapiens’ mental capabilities outstrip those of its simian cousins. The gap between AGI and ASI—although both qualify as transformative AI—would be more chasm than step. These hypothetical systems would process information at breathtaking speeds, spot patterns invisible to human minds, and learn at rates that would make today’s machine learning look positively glacial. Most intriguing, they might achieve what computer scientists call recursive self-improvement—essentially, the ability to upgrade their own intelligence.

And this brings us to what futurists have dubbed the Technological Singularity. The term, borrowed from physics, is apt: Just as the laws of physics break down at the event horizon of a black hole, human predictive powers fail at the threshold where machines begin rapidly bootstrapping themselves to ever-greater intelligence. Beyond this point, tech progress would follow trajectories that human minds, with their merely human capabilities, might find impossible to fathom.
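A toy model, standard in discussions of recursive self-improvement and offered here purely as illustration, shows why the metaphor fits. If a system’s capability I(t) feeds back into its own rate of improvement even slightly faster than linearly, capability does not merely grow quickly; it diverges at a finite time t*:

```latex
% Toy model of recursive self-improvement (illustrative only):
% capability I(t) obeys dI/dt = k I^{1+\varepsilon} with \varepsilon > 0.
I(t) = \frac{I_0}{\left(1 - \varepsilon k I_0^{\varepsilon}\, t\right)^{1/\varepsilon}},
\qquad
t^{*} = \frac{1}{\varepsilon k I_0^{\varepsilon}}
```

Before t*, the curve looks like unusually brisk exponential growth; at t*, the equation stops having an answer at all, which is a fair mathematical cartoon of what “human predictive powers fail” means.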

How do economists, rather than science fiction writers, think about the Singularity?

A new economics paper, “Transformative AI, Existential Risk, and Real Interest Rates,” defines TAI as AI systems that generate an impact comparable to the Industrial or Agricultural Revolutions. This definition embraces two divergent scenarios: “aligned” systems triggering unprecedented economic acceleration—global GDP growth surpassing 30 percent annually—or their unaligned counterparts causing human extinction. In either scenario, researchers Trevor Chow and Basil Halperin of Stanford, along with J. Zachary Mazlish of Oxford and the Global Priorities Institute, suggest watching long-term interest rates for clues. The logic is compelling: Whether AI proves beneficial or dangerous, real interest rates should rise. If markets expect AI to create unprecedented abundance, people will want to borrow against a far richer future and save less; if markets fear extinction, saving for a future that may never arrive loses its appeal. Either way, rates must climb to coax anyone into lending.
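In textbook terms, the mechanism is a version of the economist’s Ramsey rule for the real interest rate, shown here in simplified form rather than as the authors’ exact specification:

```latex
% Ramsey rule with an extinction-risk term (a simplified sketch,
% not the paper's full model):
r \;\approx\; \rho + \sigma g + \delta
% rho:   pure time preference
% sigma: aversion to swings in consumption over time
% g:     expected growth rate of consumption
% delta: perceived annual probability of extinction
```

Aligned TAI raises g; unaligned TAI raises δ. Both push r upward, which is why the authors can treat long-term real rates as a single gauge for two very different futures.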

Using inflation-linked bonds and professional forecasts across 89 countries, the authors demonstrate that higher real rates indeed predict stronger future growth, while equity valuations send mixed signals about AI’s impact. A sustained, otherwise unexplained rise in long-term real rates—something we are not currently seeing, by the way—could signal that collective market wisdom anticipates a profound technological transformation, whether utopian or dystopian.
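What would watching for such a signal look like in practice? A minimal sketch, assuming a monthly series of 10-year inflation-linked yields; the three-year window and one-point threshold are placeholder choices for illustration, not parameters from the paper:

```python
# Flag a sustained rise in long-term real yields above their trailing
# average. Purely illustrative; window, threshold, and data are placeholders.
from statistics import mean

def sustained_real_rate_rise(real_yields: list[float],
                             window: int = 36,       # trailing three years, monthly
                             threshold: float = 1.0  # percentage points
                             ) -> bool:
    """True if the latest real yield sits well above its trailing average."""
    if len(real_yields) <= window:
        return False
    trailing_avg = mean(real_yields[-window - 1:-1])
    return real_yields[-1] - trailing_avg >= threshold

# Hypothetical series: a decade near 1 percent, then a jump to 2.5 percent.
series = [1.0] * 120 + [2.5]
print(sustained_real_rate_rise(series))  # True
```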

Another forecasting approach comes from William Nordhaus, a Yale economist and Nobel laureate. In “Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth,” he offers a clever framework of six empirical tests. Are machines usurping human workers in big numbers? (They aren’t, much.) Is productivity growth accelerating? (No strong sign yet.) Is capital devouring an ever-larger share of output? (Somewhat.) Is capital intensity rising unusually quickly? (Nope.) Is information technology capital becoming dominant? (Gradually.) Do measurement problems hide accelerating progress? (Mixed evidence.) The verdict? Of his six tests, only two offer even modest support for the Singularity happening soon.
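Rendered as a simple scorecard (my own rough coding of those verdicts, not Nordhaus’s scoring):

```python
# Scorecard sketch of Nordhaus's six singularity tests; the True/False
# coding is a rough reading of the verdicts above, not his methodology.
tests = {
    "machines displacing workers at scale": False,  # "they aren't, much"
    "productivity growth accelerating":     False,  # "no strong sign yet"
    "capital share of output rising":       True,   # "somewhat"
    "capital intensity rising unusually":   False,  # "nope"
    "IT capital becoming dominant":         True,   # "gradually"
    "mismeasurement hiding acceleration":   False,  # "mixed evidence"
}
supportive = sum(tests.values())
print(f"{supportive} of {len(tests)} tests offer even modest support")  # 2 of 6
```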

For now, at least, both bond markets and economic data offer a useful corrective to breathless predictions of imminent machine supremacy. But a caveat: It can often seem darkest before the technological dawn. The final decade of the last millennium provides an instructive example. In the early 1990s, America’s mood was decidedly pessimistic, despite the end of the Cold War in America’s favor. Yet precisely at this moment of deep skepticism, the Third Industrial Revolution—driven by computing power, personal computers, and the internet—was already beginning, soon delivering a doubling of productivity growth and economic expansion 50 percent faster than the postwar average.

Similarly today, beneath current economic uncertainties may lie the foundations of another technological acceleration. Whether it is a revolution or merely evolution is still a mystery—though there may soon be clues.