
What do we know about the economics of AI?

Nobel laureate Daron Acemoglu has long studied technology-driven growth. Here’s how he’s thinking about AI’s effect on the economy.
Image: “AI” behind different kinds of graphs showing growth and decline. Caption: What are the key questions to track about AI and the economy? Credit: Christine Daniloff, MIT; iStock

For all the talk about artificial intelligence upending the world, its economic effects remain uncertain. There is massive investment in AI but little clarity about what it will produce.

Examining AI has become a significant part of Nobel-winning economist Daron Acemoglu’s work. An Institute Professor at MIT, Acemoglu has long studied the impact of technology on society, from modeling the large-scale adoption of innovations to conducting empirical studies about the impact of robots on jobs.

In October, Acemoglu also shared the 2024 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel with two collaborators, Simon Johnson PhD ’89 of the MIT Sloan School of Management and James Robinson of the University of Chicago, for research on the relationship between political institutions and economic growth. Their work shows that democracies with robust rights sustain better growth over time than other forms of government do.

Since a lot of growth comes from technological innovation, the way societies use AI is of keen interest to Acemoglu, who has published a variety of papers about the economics of the technology in recent months.

“Where will the new tasks for humans with generative AI come from?” asks Acemoglu. “I don’t think we know those yet, and that’s what the issue is. What are the apps that are really going to change how we do things?”

What are the measurable effects of AI?

Since 1947, U.S. GDP growth has averaged about 3 percent annually, with productivity growth at about 2 percent annually. Some predictions have claimed AI will double growth, or at least push it onto a higher trajectory than usual. By contrast, in one paper, “The Simple Macroeconomics of AI,” published in the August issue of Economic Policy, Acemoglu estimates that over the next 10 years AI will produce only a “modest increase” in GDP of 1.1 to 1.6 percent, with a roughly 0.05 percent annual gain in productivity.

Acemoglu’s assessment is based on recent estimates of how many jobs are affected by AI, including a 2023 study by researchers at OpenAI, OpenResearch, and the University of Pennsylvania, which finds that about 20 percent of U.S. job tasks might be exposed to AI capabilities. A 2024 study by researchers from MIT FutureTech, the Productivity Institute, and IBM finds that about 23 percent of the computer vision tasks that could ultimately be automated could be profitably automated within the next 10 years. Still other research suggests the average cost savings from AI are about 27 percent.
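To see why these inputs yield only a modest aggregate effect, the following back-of-the-envelope sketch (in Python) simply multiplies the three estimates quoted above. It is an illustration of the order of magnitude involved, not the calculation in Acemoglu’s paper.

# Illustrative back-of-the-envelope only; not the paper's exact method.
exposed_share = 0.20      # share of U.S. job tasks exposed to AI (2023 study)
automatable_share = 0.23  # share of those tasks profitably automatable within 10 years (2024 study)
cost_savings = 0.27       # average cost savings on tasks that are automated

# Fraction of all tasks in the economy actually affected within the decade
affected_tasks = exposed_share * automatable_share      # about 0.046, roughly 5 percent

# Cumulative gain over the decade if those tasks become about 27 percent cheaper
decade_gain = affected_tasks * cost_savings             # about 0.012, on the order of 1 percent

print(f"Tasks affected within 10 years: {affected_tasks:.1%}")
print(f"Rough cumulative gain over the decade: {decade_gain:.1%}")

Spread over 10 years, a cumulative gain on the order of 1 percent works out to a small fraction of a percentage point per year, which is why the headline figures stay modest.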

When it comes to productivity, “I don’t think we should belittle 0.5 percent in 10 years. That’s better than zero,” Acemoglu says. “But it’s just disappointing relative to the promises that people in the industry and in tech journalism are making.”

To be sure, this is an estimate, and additional AI applications may emerge: As Acemoglu writes in the paper, his calculation does not include the use of AI to predict the shapes of proteins — for which other scholars subsequently shared a Nobel Prize in October.

Other observers have suggested that “reallocations” of workers displaced by AI will create additional growth and productivity, beyond Acemoglu’s estimate, though he does not think this will matter much. “Reallocations, starting from the actual allocation that we have, typically generate only small benefits,” Acemoglu says. “The direct benefits are the big deal.”

He adds: “I tried to write the paper in a very transparent way, saying what is included and what is not included. People can disagree by saying either the things I have excluded are a big deal or the numbers for the things included are too modest, and that’s completely fine.”

Which jobs?

Conducting such estimates can sharpen our intuitions about AI. Plenty of forecasts have described AI as revolutionary; other analyses are more circumspect. Acemoglu’s work helps us grasp the scale of change we might expect.

“Let’s go out to 2030,” Acemoglu says. “How different do you think the U.S. economy is going to be because of AI? You could be a complete AI optimist and think that millions of people would have lost their jobs because of chatbots, or perhaps that some people have become super-productive workers because with AI they can do 10 times as many things as they’ve done before. I don’t think so. I think most companies are going to be doing more or less the same things. A few occupations will be impacted, but we’re still going to have journalists, we’re still going to have financial analysts, we’re still going to have HR employees.”

If that is right, then AI most likely applies to a bounded set of white-collar tasks, where large amounts of computational power can process a lot of inputs faster than humans can.

“It’s going to impact a bunch of office jobs that are about data summary, visual matching, pattern recognition, et cetera,” Acemoglu adds. “And those are essentially about 5 percent of the economy.”

While Acemoglu and Johnson have sometimes been regarded as skeptics of AI, they view themselves as realists.

“I’m trying not to be bearish,” Acemoglu says. “There are things generative AI can do, and I believe that, genuinely.” However, he adds, “I believe there are ways we could use generative AI better and get bigger gains, but I don’t see them as the focus area of the industry at the moment.”

Machine usefulness, or worker replacement?

When Acemoglu says we could be using AI better, he has something specific in mind.

One of his crucial concerns about AI is whether it will take the form of “machine usefulness,” helping workers gain productivity, or whether it will be aimed at mimicking general intelligence in an effort to replace human jobs. It is the difference between, say, providing new information to a biotechnologist versus replacing a customer service worker with automated call-center technology. So far, he believes, firms have been focused on the latter type of case. 

“My argument is that we currently have the wrong direction for AI,” Acemoglu says. “We’re using it too much for automation and not enough for providing expertise and information to workers.”

Acemoglu and Johnson delve into this issue in depth in their high-profile 2023 book “Power and Progress” (PublicAffairs), which has a straightforward leading question: Technology creates economic growth, but who captures that economic growth? Is it elites, or do workers share in the gains?

As Acemoglu and Johnson make abundantly clear, they favor technological innovations that increase worker productivity while keeping people employed, which should sustain growth better.

But generative AI, in Acemoglu’s view, focuses on mimicking whole people. This yields something he has for years been calling “so-so technology,” applications that perform at best only a little better than humans, but save companies money. Call-center automation is not always more productive than people; it just costs firms less than workers do. AI applications that complement workers seem generally on the back burner of the big tech players.

“I don’t think complementary uses of AI will miraculously appear by themselves unless the industry devotes significant energy and time to them,” Acemoglu says.

What does history suggest about AI?

The fact that technologies are often designed to replace workers is the focus of another recent paper by Acemoglu and Johnson, “Learning from Ricardo and Thompson: Machinery and Labor in the Early Industrial Revolution — and in the Age of AI,” published in August in the Annual Review of Economics.

The article addresses current debates over AI, especially claims that even if technology replaces workers, the ensuing growth will almost inevitably benefit society widely over time. England during the Industrial Revolution is sometimes cited as a case in point. But Acemoglu and Johnson contend that spreading the benefits of technology does not happen easily. In 19th-century England, they assert, it occurred only after decades of social struggle and worker action.

“Wages are unlikely to rise when workers cannot push for their share of productivity growth,” Acemoglu and Johnson write in the paper. “Today, artificial intelligence may boost average productivity, but it also may replace many workers while degrading job quality for those who remain employed. … The impact of automation on workers today is more complex than an automatic linkage from higher productivity to better wages.”

The paper’s title refers to the social historian E.P. Thompson and the economist David Ricardo; the latter is often regarded as the discipline’s second-most influential thinker ever, after Adam Smith. Acemoglu and Johnson assert that Ricardo’s views on machinery went through their own evolution.

“David Ricardo made both his academic work and his political career by arguing that machinery was going to create this amazing set of productivity improvements, and it would be beneficial for society,” Acemoglu says. “And then at some point, he changed his mind, which shows he could be really open-minded. And he started writing about how if machinery replaced labor and didn’t do anything else, it would be bad for workers.”

This intellectual evolution, Acemoglu and Johnson contend, carries a meaningful lesson for today: No forces inexorably guarantee broad-based benefits from technology, and we should follow the evidence about AI’s impact, one way or another.

What’s the best speed for innovation?

If technology helps generate economic growth, then fast-paced innovation might seem ideal, by delivering growth more quickly. But in another paper, “Regulating Transformative Technologies,” from the September issue of American Economic Review: Insights, Acemoglu and MIT doctoral student Todd Lensman suggest an alternative outlook. If some technologies contain both benefits and drawbacks, it is best to adopt them at a more measured tempo, while those problems are being mitigated.

“If social damages are large and proportional to the new technology’s productivity, a higher growth rate paradoxically leads to slower optimal adoption,” the authors write in the paper. Their model suggests that, optimally, adoption should happen more slowly at first and then accelerate over time.

“Market fundamentalism and technology fundamentalism might claim you should always go at the maximum speed for technology,” Acemoglu says. “I don’t think there’s any rule like that in economics. More deliberative thinking, especially to avoid harms and pitfalls, can be justified.”

Those harms and pitfalls could include damage to the job market, or the rampant spread of misinformation. Or AI might harm consumers, in areas from online advertising to online gaming. Acemoglu examines these scenarios in another paper, “When Big Data Enables Behavioral Manipulation,” forthcoming in American Economic Review: Insights; it is co-authored with Ali Makhdoumi of Duke University, Azarakhsh Malekian of the University of Toronto, and Asu Ozdaglar of MIT.

“If we are using it as a manipulative tool, or too much for automation and not enough for providing expertise and information to workers, then we would want a course correction,” Acemoglu says.

Certainly others might claim innovation has less of a downside or is unpredictable enough that we should not apply any handbrakes to it. And Acemoglu and Lensman, in the September paper, are simply developing a model of innovation adoption.

That model is a response to a trend of the last decade-plus, in which many technologies have been hyped as inevitable and celebrated for their disruption. By contrast, Acemoglu and Lensman suggest we can reasonably judge the tradeoffs involved in particular technologies, and they aim to spur additional discussion about that.

How can we reach the right speed for AI adoption?

If the idea is to adopt technologies more gradually, how would this occur?

First of all, Acemoglu says, “government regulation has that role.” However, it is not clear what kinds of long-term guidelines for AI might be adopted in the U.S. or around the world.

Secondly, he adds, if the cycle of “hype” around AI diminishes, then the rush to use it “will naturally slow down.” This may well be more likely than regulation, if AI does not produce profits for firms soon.

“The reason why we’re going so fast is the hype from venture capitalists and other investors, because they think we’re going to be closer to artificial general intelligence,” Acemoglu says. “I think that hype is making us invest badly in terms of the technology, and many businesses are being influenced too early, without knowing what to do. We wrote that paper to say, look, the macroeconomics of it will benefit us if we are more deliberative and understanding about what we’re doing with this technology.”

In this sense, Acemoglu emphasizes, hype is a tangible aspect of the economics of AI, since it drives investment in a particular vision of AI, which influences the AI tools we may encounter.

“The faster you go, and the more hype you have, that course correction becomes less likely,” Acemoglu says. “It’s very difficult, if you’re driving 200 miles an hour, to make a 180-degree turn.”
