Daron Acemoglu
Subject: AI is a Mixed Blessing
Bio: Nobel Prize Winner in Economics, Professor at MIT, and co-author of Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity
Transcript:
Larry Bernstein:
Welcome to What Happens Next. My name is Larry Bernstein. What Happens Next is a podcast which covers economics, politics, and international relations.
Today’s topic is AI is a Mixed Blessing.
Our speaker is Daron Acemoglu who won the Nobel Prize in Economics. Daron is a Professor at MIT and is also the co-author of Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity.
I want to learn how AI will improve productivity as well as what its effect on inequality will be. We will compare AI’s impact with that of the industrial revolution to better understand how the choices we make will alter work and our society.
This podcast was taped at a conference that I hosted in Washington DC. So, you are going to hear questions asked by me as well as by my friends.
Daron, please begin with six minutes of opening remarks.
Daron Acemoglu:
We live at a critical juncture in the choices we make about technology, its capabilities, geopolitical balances, and demographic change. But despite these difficulties, there is optimism, especially in the United States, rooted in the fact that the US has always been an optimistic society about technology.
There are two sets of issues that I want to raise. The first is the social consequences of technology and the second is the economic difficulties of integrating new technology into an already complex economy.
Technology should enable us to do things that we couldn’t do before. But decisions are not made by humanity as a whole. They are made by powerful actors in haphazard fashion, with consequences, some unforeseen, some unintended.
Social implications are more complex. Technology is a tool that creates winners and losers. That’s fine, because as Schumpeter’s work on creative destruction emphasized, you need losers in the process of introducing new things. But sometimes the winners could be a small number of privileged people, and the losers could be many. Creative destruction is not proof that everything is going to work out.
There are examples of technologies that, either in the way they were deployed or by their nature, created many more losers, and cleaning up after them was difficult. That is doubly true when technologies extend beyond the production domain into things like communication, armaments, social relations, and so on. AI intersects with all those domains.
There is a real concern about what it will do to the labor market. It could sideline humans from a variety of tasks. We are totally unprepared for its distributional and societal implications. It could create a much less competitive landscape, or a much more competitive one. And wild distributional implications will inevitably remake the way that we consume things and organize society.
The important point is about choices and about how technology should be used. When you look at historical examples from the early mechanization technologies of the industrial revolution, incorporating science, chemistry, and physics into the production process created a whole menu of possibilities, and how you develop that menu is very important. That is doubly true when you come to digital technologies that affect communication. AI makes it quadruply true. So, the choices we make are very important.
The future of work will depend on whether we use AI mostly for automating tasks or for what I call pro-worker AI, which is about expanding human capabilities, creating new sources of information, skills, and expertise to perform new tasks. The same goes for AI’s effects on communication and on how we organize groups of people sharing information, doing science, and so on.
The societal implications will be with us within 15 years. You see the implications, and some of the choices are made, not month-to-month but over a longer time horizon. Then there is the question of how quickly AI is going to generate productivity gains. Every technology takes longer to incorporate into businesses than initially anticipated, and that is even more so for AI, because it needs to be accompanied by organizational changes as it is integrated into businesses.
Fundamental advances in foundation models, the lowest layer of AI, such as Google’s Gemini or the models from OpenAI and Anthropic, are happening relatively fast. But integrating them into businesses is going to take a while. Building applications on top of them is difficult, and even when firms adopt and use those applications, the productivity gains and occupational implications over a five-year horizon may be smaller than many people expect.
Larry Bernstein:
There was an economist who said that we see the benefits of technology everywhere except in the GDP statistics.
Daron Acemoglu:
Solow said that you can see computers everywhere except in the productivity statistics.
Larry Bernstein:
Alan Greenspan made some speeches about how electricity was slow to be integrated into the manufacturing process. What seems different about AI versus electricity for mills is that we had to abandon the mills in Massachusetts, build new mills near the new electric power sources, and then redevelop all those different machines to take advantage of this phenomenon.
But AI seems to me more like the movement away from the Wang computer to the personal computer. Just as a metaphor: when I started as a financial analyst in 1987, I didn’t have a PC. My secretary had a Wang word processor, so I would write my presentation out by hand and then she would type it up. I couldn’t wait for her to leave at five o’clock so I could make the changes in a much more productive way.
And then a couple of years later I got my own PC, and then that secretary no longer was employed as my assistant.
AI seems like a rocket ship compared to that transition from the Wang to the PC, because you can ask it a question and it gives you the answer in seconds. So, I don’t really see why we’re not replacing the mill. We’re contributing knowledge and know-how to solve the problem. Take us through how that transition might differ from the mill in Massachusetts.
Daron Acemoglu:
Bob Solow made that statement in 1987 about computers, and when he made it, people told him, “Bob, you don’t understand this technology, just wait a few years,” and we’re still waiting. Except for a four-year period of rapid productivity growth around 2000 attributed to the internet, macroeconomic productivity growth has been very slow in the computer age, because computers, despite their much less radical reorganizational needs, have been difficult to incorporate.
There may be some similarities between the mill, or what Ford had to do to incorporate electrical machinery, and the way that we need to incorporate AI.
There’s a huge gap between large and small enterprises in terms of technology adoption, and that’s even more true for digital technologies and AI. Rapid AI adoption is going to come either from large corporations or from completely new corporations. If it’s completely new corporations, that’s like the mills, because then they must scale up. We have some successful examples of that, like Google, which went from nothing to one of the largest corporations in 25 years. But in general, that’s a slow process. Large corporations are hugely bureaucratic. Incorporating AI and reorganizing everything, including human resources, around it is not that different from what Ford had to do: he had to build completely new factories with a new structure in which machinery was sequenced and the objectives were reformulated. He had to introduce a clerical workforce, engineers, and new managers, and train workers in very different ways. Things are happening faster, so it may not take as long as electricity, but electricity from its inception to its major adoption took about four decades.
Larry Bernstein:
You highlighted the productivity statistics. We had Robert Gordon speak at our book club in Chicago, and he emphasized that the productivity statistics can mislead: the window screen that keeps out the bugs is incredibly valuable, but its contribution to GDP is small, while Microsoft Excel’s 16th version shows up as an enormous increase in our GDP statistics. And frankly, I never figured out what those incremental add-ons were doing.
Take this meeting here. I put it together.
Daron Acemoglu:
Good job!
Larry Bernstein:
Thank you. I just sent an email, and you said yes. I was able to get food deliveries, and I invited all these people by using Evite. But if I had to pick up the phone or write a letter, the workload would’ve been too much. Because of technology, I could put on this conference with relative ease. Technology made it happen. This success won’t show up in the productivity statistics, but it’s amazing. How should we think about the failure of productivity statistics to capture rapid changes in how society is organized?
Daron Acemoglu:
There are two reasons why productivity statistics are wrong in general. One is quality; the other is new goods, not new inputs, new goods. Both are relevant. They’ve been relevant for the digital age, but they were relevant before then. If you look at indoor plumbing, that’s a huge new good that we haven’t completely incorporated into our GDP statistics. If you look at the strip mall phenomenon, it indirectly affected GDP statistics because it affects spending on retail and construction. But the way that it reorganized the urban landscape is left out of the statistics.
There are these new goods, and the same goes for AI. My hope for AI is not just that it is pro-worker; it could also enable us to introduce new goods and services. But we’re not there yet, so any mismeasurement there is still to come.
The second is quality. Now, quality is hard, but the BLS does an excellent job with manufacturing. Obviously, the iPhone does much more than a flip phone, and that’s not fully incorporated in the statistics, but the BLS does a very good job. For services, which are now 85% of the economy, quality is much harder to measure. Now, in the case where we no longer need some labor, that’s not going to cause mismeasurement: previously you would’ve done this with your assistant, now the assistant is not there, so it’s just you, and productivity doubled. But if, in doing this with AI, you’ve improved quality, you’ve got better speakers or a better match between the audience and the speakers, that’s a big deal that we wouldn’t measure.
I don’t think anybody can say that we are measuring the quality of healthcare in the United States. We spend twice as much as Japan, France, Germany, and the UK, and life expectancy in this country is five years less. If the service quality is super high, I don’t know where it is.
In education, I wouldn’t say that our quality is inspiring.
Alex Graham:
I was a banker during the telecom bubble, and I saw tremendous overinvestment in capital expenditures, so I’ve been cynical about this. A friend of mine in healthcare said that the best AI engine in healthcare is as good as the top 5% of the graduates from Harvard Medical School. But the second best is as good as the rest of the class. And the second best may be good enough.
A technology CEO that I’m close to said that his company’s investment in the previous generation of Nvidia chips was a major investment for them. They spent $30 million. That was nine months ago. They’ve since ripped it all out to replace it with the next generation. I said, how could you possibly justify that economically? He said, the cost of what we do has gone from $15 per function to 50 cents per function with the new technology. So, the technology is a 10x.
Then the software that runs on top of it is three times faster and can only be run on the newest chip. It’s Moore’s law on steroids: it’s a 9-month cycle, not an 18-month cycle, and it’s 30x. So, I’m curious if your book may turn out to be…
Daron Acemoglu:
Completely wrong.
Alex Graham:
It is gloomy, based on that anecdote. And I start from the premise that I loved your book.
Daron Acemoglu:
Thank you, Alex.
Alex Graham:
But then as I’ve tested in the field, I keep hearing these anecdotes.
Daron Acemoglu:
We are in the early days, so we cannot rush to conclusions, but the evidence that has emerged over the last year is that in the lab, in controlled circumstances where the context is clear and models have been trained on similar tasks, they do quite well, better than what people might have expected. Two examples where that success is very clear are coding and radiology. Those are well-defined problems with a right answer and a wrong answer; that’s why they are very much at the forefront. They’re not socially ambiguous or highly individual-specific, like diagnosing a person’s problems more generally, but they’re still difficult problems.
What we are seeing is very impressive performance from GPT-4 or GPT-5. But all of the examples where these models are implemented in the wild by businesses, in their actual delivery of services, turn out to be not so good. And the reason for that is this integration problem.
Take coding: the AI’s output must be integrated with the rest of the code, including bug fixes, troubleshooting, and changes as you integrate it with something else. It turns out that once AI does its part and humans must do the next step, the humans take much longer. Despite the very impressive performance of AI, throughput is reduced. This may be a transitional problem.
Humans may get better at working with AI. AI may get better at having an output that’s easier to detect and do those additional things by humans.
AI is not good enough to take over the whole task of radiology reporting. So, what happens is that AI models make a recommendation, and a radiologist or doctor must incorporate it, and that next step completely breaks down. Humans don’t understand what the AI is saying. They don’t know whether the information coming to them is of high or low reliability, and when they combine it with their own information, they do worse than doctors without AI, and much worse than AI under lab conditions. Those are problems.
Now, those problems exist with every technology, and they are going to be smoothed out. But here is where your other observation comes in: if I have physical equipment that will last, say, 10 years, then I can invest a lot. I may not get the full returns, because there is that training and human-adaptation stage, but I still get a reasonable return on investment. If the models must be turned over every nine months, that adds another difficulty.
We are expecting more from AI than it is currently capable of delivering. And if it is AI hype, some companies may get the returns, but others won’t. Nvidia, for example, is going to get the returns unless there’s a massive meltdown. OpenAI might get the returns, although they have made such outlays that it’s hard to see how they’re going to monetize what they have. But some of the companies adopting these models may not get the returns, despite the impressive performance in the lab.
Kieran Claffey:
Where do you see the short-term and medium-term disruption of companies like that? My second question relates to Nvidia: Jensen Huang wants to make the US chip the global standard in chip technology, so that all applications will run on it, and that will create dominance and traction for years to come.
Do you agree with the US government philosophy of limiting access to chip technology and is the US government making a mistake by not making a US chip the basic global standard?
Daron Acemoglu:
Facebook, now Meta, has the one proven way of monetizing data, which is digital ad revenue; it’s 95-plus percent of Meta’s revenues. It’s looking for new sources of revenue, but I’m not sure whether it’s going to be successful. Google is also trying to find other sources of revenue, but I don’t think it’s going to be that easy. OpenAI has $10 billion of revenue. A lot of that is from consumers, but there is a limit to how much you can get from consumers, and if the only way you can monetize AI is by subscriptions, it’s not going to pay for a fraction of the outlays. Now, there could be a completely new model based on digital advertising, but many people are banking on applications that enter the production process: healthcare, education, manufacturing, and so on. Even those wouldn’t justify the investments Meta is making. Absolutely not. They’re just insanely high investments.
It comes from the belief that we are at the cusp of a winner-take-all market and that you must go all in to win that race. Zuckerberg said, so what if we waste $500 billion?
That’s also the mindset that has brought us to the current US-China relations. I do not know whether the CHIPS Act was right; there are arguments either way. But I believe the overall approach of defining our relationship with China, both in AI and more broadly, in zero-sum terms, based on the view that we are locked into this AI race and that whoever wins it is going to have massive geopolitical, military, and strategic advantages, is wrong. I don’t think the AI race is going to be resolved within the next five years, and I don’t think it’s going to be the end of everything.
There are so many other issues on which, even though I am extremely opposed to the Chinese regime, we do need to cooperate with the Chinese, and the AI race is making that much harder. That’s why reframing what we want from AI, what its capabilities are, and the timeframe is important, not just for US businesses but for geopolitics as well.
Larry Bernstein:
Do you think that the Smartphone is going to be disrupted?
Daron Acemoglu:
I don’t know. More portable devices will become feasible, but they may not, for quite a while, have the same convenience as the smartphone, which has become so multifaceted that replacing it with other wearable devices might be slower. My default answer is that things are often somewhat slower than expected, but there have been some very rapid changes as well.
Especially for things that have a network element, there is a reason for slowness, because your social network matters, but sometimes that network could also act as an amplifier. Still, I would be surprised if we don’t still have our phones glued to our hands five years from now.
Rory MacFarquhar:
The biggest takeaway that I had from your book was that technology is a choice, that it’s not preset.
Daron Acemoglu:
100%. That’s the main message. Thank you.
Rory MacFarquhar:
AI in the current moment is not presented to us as a choice. It’s presented as a vicious zero-sum competition among companies and between the US and China, in which no individual and no set of people are making choices; they’re just being driven by the iron cage of this competition.
Daron Acemoglu:
That’s also one of the secondary theses in the book: those choices right now are being made in a very narrow way. How are choices made about technology? There are technology producers, and then there are the users of technology, especially the corporate sector.
Now, at each layer there are four sets of influences. One is the ideas, visions, and aspirations of the people making those choices. The second is the competitive process. The third is pressure from civil society, and the fourth is government regulation. The third and the fourth are obviously much weaker, because neither civil society nor the government can be the driver of technology; at best, they can steer it indirectly. So, what are the incentives for a monopoly to choose the right technology?
Economic theory is quite clear that they are not very strong. There are some: if there are two technologies that do the same thing and have the same effects on cash flow, market share, et cetera, the monopolist would choose the one that’s better. But once technologies start differing in their cash-flow and market implications, there is no guarantee that a monopolist would choose the right ones.
Competition is typically a good corrective because it takes you out of being locked in: if I’m doing only the thing that is good for my business model, somebody can come up with an alternative business model, et cetera.
We’re not at a very competitive stage in the tech sector right now. Civil society is a very weak force, and it doesn’t always bark up the right tree.
Government regulation has been very weak in the United States. Antitrust has been completely absent. Oversight and government understanding of AI have been very weak. At the end of the day, the leaders of a handful of companies are making these choices, and there is no guarantee from economics or from other fields that this will necessarily lead to the right decisions. We may end up lucky: those decisions may not matter, or they may make the right choices. But there are a lot of reasons for thinking that may be wishful thinking.
Rory MacFarquhar:
Europe has been much more active in regulating AI and has succeeded in regulating AI out of Europe.
Daron Acemoglu:
100%. That’s a very good point.
There are three models: the American model, the European model, and the Chinese model. The American model is don’t interfere; let the big industry leaders do what they want. The Chinese model is very draconian and very non-democratic, but it’s super effective. AI companies do exactly what the Chinese government wants. So, if you want proof that AI can be regulated, just look at China. Regulation doesn’t seem to have slowed AI down there, and for a long time steering the direction of AI was much more effective because they weren’t into AGI. They wanted to use AI for practical things like surveillance, monitoring, and facial recognition.
They did that amazingly well. There are a couple of academic papers showing fast development of the technologies they wanted, and they’ve been much more draconian in dealing with social media. But on the other hand, their objectives are screwed up, and their approach couldn’t work in a democratic society.
Europe has the third model, which is very lofty ideals, but done badly. They have made the European AI industry fall behind. So, my advice to European policymakers is that you cannot have effective regulation unless you also have an AI sector. Europe as the regulator and the US as the creator is not a viable model, but I think there is a possibility of combining aspects of these three models.
Larry Bernstein:
Let’s go back to your radiology example. I had Dr. Ari Ciment from Mount Sinai Hospital in Miami Beach on my podcast to discuss this topic, and there’s a new AI that’s made specifically for physicians. I was in his office for my annual physical, and my blood pressure was elevated at 130/90.
Daron Acemoglu:
That’s my lowest ever. So, you’re doing well.
Larry Bernstein:
I asked Ari, “Should I go on blood pressure meds?” He said, let’s ask the new medical AI. The AI prompt asked for my lipid panel and a dozen other personal medical inputs, and the program concluded that, given my health characteristics, I should not take medication. In fact, 95% of American patients with my health characteristics are not taking medication.
When we changed some of the variables, like increasing my cholesterol numbers, then 50% of such patients were on blood pressure medication. Alternatively, if I had early-stage diabetes, then the AI recommended taking meds for sure. With each medical recommendation, the AI included a footnote referencing a journal article or study. And to your point, before the doctor forms a holistic view of the patient, the physician can use the data provided by AI to make a better judgment call.
The second example: my daughter Hannah had a cough that had been going on forever, and I told her to investigate it, but she hadn’t done anything. Dr. Ari Ciment is a pulmonologist, so I said, we’ve got a cough in the house. He asked how long it had been going on. A year? She should see somebody. I said, I told her. Then I asked, can we use AI to learn what we should be doing about her cough? We went through some basic questions, and then the AI asked, “Has she been in a country where tuberculosis is prevalent?” And Ari said, I wouldn’t have asked that question, because this is a relatively remote risk, but great question, AI. I said, no, she hadn’t.
I asked Ari, what should we do? He said, “Have her take Claritin for a couple of weeks and see.” And sure enough, it worked.
Daron Acemoglu:
My doctor said the same thing to me when I had a cough, without doing all the AI things.
Larry Bernstein:
Fair enough. But what I see is this complementarity of humans and AI working together to make more informed decisions.
Daron Acemoglu:
Absolutely. That’s what I mean by pro-worker AI. But the problem is that it’s much harder than it’s currently presented, which is why I gave the coding and radiology examples. It not only requires very reliable AI models; it requires models that are appropriately calibrated across a variety of contexts and that are explainable and legible enough for human decision makers to evaluate their recommendations.
That’s a very technically hard problem. Could it be that things are going very fast with AI and all past experience will not be a good guide? Absolutely, that’s a possibility. But let me give you another example from healthcare: electronic health records. It has been hard and costly to implement electronic health records, and they still do not work well. There are still so many problems in sharing information across hospitals, even within the same network.
Larry Bernstein:
Do we regulate medicine poorly so that hospitals and doctors cannot properly share patient medical information?
Daron Acemoglu:
Some of it is disclosure; they must jump through other hoops. MIT, which is where I see my doctors, is in the same network; they’re on the same software, but they still cannot share. Every time I go, I must do the same blood tests and x-rays because they can’t share. It’s costly to move to those systems, and we’re still not getting the fruits of it. It’s going to get there, but it’s very slow.
I’m not an AI pessimist. AI is going to have a lot of capabilities. It’s just going to be very slow compared to expectations.
Colin Teichholtz:
I think AI will be similar to prior waves of technology. Go back to the middle of the 19th century, when we started having mechanized tractors and a huge percentage of the workforce was involved in agriculture. As we got tractors, we didn’t necessarily try to adapt those technologies to be worker-friendly so that we could preserve jobs. It turned out there were lots of other jobs for those people to do, and the market was very efficient at using that human capital in some other way.
Daron Acemoglu:
100%.
Colin Teichholtz:
When I think about AI and the potential disruption, it’s not farmers and tractors anymore; it’s knowledge workers. But the market will sort itself out. If AI initially displaces a bunch of people with a lot of valuable human capital, the market will figure out how to reallocate those people. And I don’t lose a lot of sleep.
Daron Acemoglu:
You should lose a lot of sleep. Essentially, there are two theories. One is that workers create jobs when they go looking for them; the second is that you need new technologies and new investments to create those jobs.
Things worked out reasonably well during the transition out of agriculture: 60% of the workforce was in agriculture in 1850, and by 1930 it was less than 5%. But what happened during that process is that we had a tremendous revolution in manufacturing and services as well, some of it driven entirely by new technologies and new organizations. There were no clerks, no researchers, no engineers in the United States in the 1860s. Then the whole manufacturing sector filled up with clerks, engineers, and technical workers.
That didn’t happen by itself. The danger is if you don’t create those jobs, because you don’t have those technologies, and you still dislocate the agricultural workers. That has happened many times around the world at different times. So that’s the danger now.
When the first stage of the British Industrial Revolution happened with textile innovations, that’s exactly what happened: the real wages of weavers fell by about two-thirds over 40 years. And in many developing countries, when you had the same transition, the urban centers did not create similar jobs, and you had a huge increase in unemployment and underemployment.
AI is a general-purpose technology, so it’s going to affect every sector of the economy. If there is this potential for making AI pro-worker and we miss it, that is going to exacerbate the problems. On the other hand, if there is indeed potential for pro-worker AI and we realize it, the transition will be much smoother.
Larry Bernstein:
Following Colin’s point, weavers lost their jobs, but the number of textile workers employed in Liverpool and Manchester grew by a multiple. You now have so many kinds of suits to try on. In the old days, Daron, you would have had a mediocre suit.
Daron Acemoglu:
How do you know my wardrobe?
Larry Bernstein:
Because of that technological improvement, a suit is inexpensive; it probably costs you less than an hour’s wage. Before, it cost you a week’s worth of wages. It’s been an incredible productivity enhancement. We should be overjoyed, even though those guild weavers lost their jobs.
Daron Acemoglu:
And the question is exactly Colin’s: did that happen automatically, or did it require the right investments? The element of choice was critical. For the cost reductions in textiles, probably a lot happened automatically: even if we had not made the right innovations in other sectors, the cost of textiles would’ve declined. But that cost decline would’ve been accompanied by mass unemployment or much lower wages. And that’s the problem.
Colin Teichholtz:
That’s a very pessimistic take on the value of the human capital stock and on how markets work: that if you have all these people who suddenly become underemployed or unemployed, and they have skills, the market will not find a way to use them.
Daron Acemoglu:
Why didn’t it happen in the US over the last 40 years? If you look at the real wages of workers without a college degree, especially those with a high school degree or less, they have declined in real terms. The US economy, despite its amazing dynamism, didn’t create jobs for these workers that were comparable to what they had. Now, you can say that’s because these workers were not adaptable or didn’t have the right skills, and perhaps the future workforce will have the right skills. But again, that brings in the element of choice.
A very important part of that story, and exactly why this didn’t happen in the 1950s but did happen in the 1980s and 1990s, is that in the 1950s and 1960s there were workers who lost their jobs, but there were other jobs with organizations that valued and used their skills. There is no guarantee that will happen again.
Lev Mikheev:
I have my own healthcare business where we try to treat healthy people, so we don’t wait until your blood pressure goes up or Hannah starts coughing. We work on a program so that you don’t get a cough or high blood pressure.
That requires analyzing a lot of data and bringing people from different disciplines together. It turns out that current doctors and medical school education don’t prepare them well for this, but we use AI agents that do a much better job than doctors. You still need an integrator, though: a programmer who creates this program for the people who read the reports. Before, those reports were written by doctors who weren’t doing a very good job; now they’re being written by AI agents.
The other point I want to make is that we don’t need the latest AI. We don’t need Gemini 5.0; we don’t need GPT-5; GPT-4 is more than enough. For what we do, chips do not depreciate very fast, and you can run on old chips for a long time. But people will be replaced.
Daron Acemoglu:
In the address I gave to the American Economic Association a couple of years ago, I highlighted areas where I thought the choice of technology was not always going in the right direction, and one is preventative versus curative healthcare. We underinvest in preventative healthcare in the United States. That requires better vaccines, a human touch between the care team and individuals, a different organization of healthcare insurance, and AI incorporation. It’s also very relevant to whether you need the latest models or not; I think that depends on the field.
I’m also working with technologists, not in healthcare but in other areas. In some of those areas, what you need to do is well inside the frontier but requires understanding context, and even frontier models are falling short. So whether there’s a big difference between GPT-5 and GPT-4 is going to depend on where you are, but for many things that are essentially pattern-recognition based, you don’t need the latest models.
Alan Freedman:
I’m with you that this is going to take a long time. My professional job is to invest in technology companies, and over the last 20 years I’ve invested in private companies, including Facebook when it was private, Alibaba when it was private, and WhatsApp before it got bought by Facebook. We were fortunate to invest in OpenAI before ChatGPT came out, and I was in the first core group who saw the prompt for the first time. Sam Altman told me, ask the AI anything. I’m like, what do you mean? My point in bringing all this up is that we’ve seen these incredible private businesses go from zero to the moon in a short period of time in terms of hyperscale growth and early adoption. Now, most adoption has been consumer, for limited use cases.
Daron Acemoglu:
100%.
Alan Freedman:
And then, over a very long period, it’s figuring out: what’s our profit model, and what’s our sustainable moat? All the venture-backed businesses use the same playbook, which is that the moat is going to be scale and adoption; we’ll figure the rest out later.
And they spend and spend to keep that. One of the elephants in the room is that that playbook works for new businesses, while at the same time you’ve got old businesses trying not to get disrupted.
At the top S&P tech companies, there’s massive turnover within the industry; seven of the 10 biggest technology firms didn’t exist 15 years ago. There’s a competitive aspect among these tech firms to spend, spend, spend on AI. But it’s like pushing on a rope, because of some of the issues you brought up. Take cybersecurity: there is no cybersecurity for large language models. Enterprises adopting large language models are largely doing so with blinders on. They don’t understand that they’re bringing inside their firewalls models that have huge holes in them, which hackers are currently exploiting.
Daron Acemoglu:
And there’s going to be a lot of data leakage too.
Alan Freedman:
China and North Korea are already listening inside those models, and there’s no education about it. At every public cybersecurity company, the investments they’ve made to fix the broken window are paltry. All they’re doing is putting window dressing on it and saying, we’re now in the large language model protection business. But we know from investments we’re making in that area that there’s nothing under the hood yet.
And they’re years away from having a credible solution. So how can enterprise AI adoption really happen without that, without an AI security architecture? The minute you get outside of the apex companies, firms don’t have the teams to think about AI infrastructure.
Daron Acemoglu:
Alan, I agree with you 100%. I’ve been wanting to compare AI adoption to internet adoption. Adoption by businesses for existing processes is much faster for AI. But where I think AI will not be as fast is in the way the internet was used to create new businesses and new services.
Alan Freedman:
I’m incredibly optimistic about the impacts of AI on drug discovery, autonomous driving, and more; the list goes on and on, because we’re seeing the green shoots of it now. But it’s just going to take a long time.
Daron Acemoglu:
I don’t subscribe to laws in social science, but there’s this thing called Amara’s law, which is that you always overestimate the impact of technology in the short run and underestimate it in the long run. I think that’s exactly our position now. I don’t know whether we are underestimating or overestimating the long-run effect, but I agree it’s going to be major. And yes, I think we are overestimating the short-term effect.
Larry Bernstein:
How will AI result in greater inequality?
Daron Acemoglu:
That’s super important. Studying the historical evolution of technology, you see adverse distributional effects bundled with beneficial productivity effects, both in history and in the present. But there are AI optimists claiming this time is different because, in the past, everything worked out well. If you look at the British Industrial Revolution, there are really two very different phases. From 1750 to about the 1840s, you have a huge increase in inequality. Wages are stagnant or declining, work hours are increasing, and working conditions are getting worse. It’s very difficult to find any evidence that, broadly speaking, workers are doing better, although there are some industrial centers where things are improving a little.
Then from the 1840s onwards, things start improving markedly, in terms of working conditions, working hours, and real wages. So, it really depends on how you’re using technology and the broader institutional context in which it’s embedded. That’s also the lesson for AI today: its economic impacts generally, but also its effects on wages and employment, are going to depend on how it’s used and on the broader regulatory framework that develops around it.
Every technology is regulated. Industrial organization is at the heart of every technology and is subject to antitrust. It’s always subject to contract law, and there is a whole area of contract law for AI that we have not even touched.
So, there are two things I want to point out regarding regulation. One is that you’re going to have an AI stack: foundation models, intermediate applications on top of them, and then consumer- or business-facing applications. How that stack is going to be supported contractually, for example where liability is going to lie and what the obligations of the different layers are, is completely unworked out.
The second is data. It is obvious that data is going to be the lifeblood of AI; nobody would think that data is going to be less important in the future than, say, real estate. But think about how we enforce property rights and protect real estate: for data, we have none of that. So there needs to be an infrastructure for data in which people own and control their data, perhaps collectively, perhaps individually. All of that is still to be worked out, and then there is all the antitrust work. Those questions are going to be quite critical, and not just for which businesses benefit, which is an important dimension of inequality, but also for capital versus labor, different types of skills, and different job opportunities.
I think we’ve lulled ourselves into a false sense of security. Sometimes the alarmists do the most damage, because they talk of mass unemployment coming in the next few years. That’s completely out of the question; it won’t be like the Great Depression. But if you look again at the last 40 years, at the demographic groups whose skills became undervalued, their employment-to-population ratio decreased quite significantly. So some groups being priced out of the labor market is not out of the question. Again, it won’t be mass unemployment, but it could have very disruptive effects.
Larry Bernstein:
Thanks to Daron for joining us. If you missed the last podcast, the topic was The Democrats Will Rebound.
Our speaker was John Sides who is the Chair of the Political Science Department at Vanderbilt and the author of the book entitled The Bitter End: The 2020 Presidential Campaign and the Challenge to American Democracy.
John explained with detailed voting data why Trump won the 2024 presidential election. John also made predictions for the upcoming midterm elections.
You can find our previous episodes and transcripts on our website, whathappensnextin6minutes.com. Please follow us on Apple Podcasts or Spotify. Thank you for joining us today, goodbye.