
Making money from AI — After DeepSeek

This is an audio transcript of the Tech Tonic podcast episode: ‘Making money from AI — After DeepSeek’

Madhumita Murgia
Hi. I’m Madhumita Murgia, AI editor at the Financial Times. And we’re going to start this episode of Tech Tonic in a night school in Tottenham, North London.

Vahap Can
All right, guys. Welcome back.

Madhumita Murgia
Which may be a strange place to start a show about Silicon Valley’s biggest tech craze, but bear with me.

Vahap Can
Today, we’re going to talk about a little bit more advanced technique related to prompt engineering, OK?

Madhumita Murgia
What you’re hearing is the Wednesday evening generative AI prompt engineering class at Capital City College. It’s a free six-week adult education course. And the idea is to teach students how to ask questions and instruct generative AI tools like ChatGPT to get the best possible results.

Vahap Can
When you’re using multistep prompts, you need to make sure the transitions are clear, OK? So . . . 

Madhumita Murgia
Vahap Can, the instructor, spoke with our producer Josh about the class.

Vahap Can
We’re based in Tottenham and the majority of the students are local, so some learners are employed. We had students that came from a marketing background. There was a student who was in a very senior position — and she wanted to learn more about AI, how it can help her with day-to-day activities — people that were retired and just wanted to know about AI. There’s people in different sectors . . .

Madhumita Murgia
Courses like this one are becoming increasingly common and popular. And that’s testament to the incredible impact that generative AI has had in the two short years since it’s been available to the public. The college developed the course to respond to industry demand, but also demand from college-aged students.

Vahap Can
Students, we could say, are using ChatGPT to create a particular piece of work — an essay, for instance. The core principle when we were designing this course was to teach them how to use ChatGPT to actually learn a particular thing. However, we then thought about professionals. So teaching students prompt engineering to basically semi-automate the tasks in their day-to-day work activities.

Josh Gabert-Doyon
So wait, so this came about in part because you guys were getting students who were using ChatGPT for essays?

Vahap Can
Yeah. For instance, you would see them use it for reports, for essays. So rather than using it as a tool to learn, they (inaudible) and use it as a tool to do the work for them. Essentially, this is a great tool for learning, for instance.

Madhumita Murgia
Courses like this one are meant to lead to more productive use of generative AI tools. If people keep going to those night classes in north London, it’s because, like all of us, they’ve been told that Gen AI is set to revolutionise the way all of us live and work. That’s certainly the assumption underpinning the business model of companies making this technology: Google, OpenAI, Microsoft and Meta. So, I wanted to have a poke around that business model. Investors are throwing money at these companies on the promise of generative AI. But then, along came China’s DeepSeek in January this year. DeepSeek matched its expensive American competitors with its own much cheaper version of the software, which raises the question: has everyone been grossly overvaluing the AI pioneers of Silicon Valley?

[MUSIC PLAYING]

$5.6mn — that’s how much it apparently cost DeepSeek to develop its large language model. Although that figure likely doesn’t include costs such as salaries and semiconductors, it’s still a fraction of the money being spent by the likes of OpenAI or Anthropic on their models.

Cristina Criddle
OpenAI is estimated to have burnt through about $5bn last year, which is almost as much as it raised.

Madhumita Murgia
Cristina Criddle reports on tech for the FT from San Francisco.

Cristina Criddle
And the core drivers of its spending are talent — salaries are huge in AI. Entry-level jobs are at $300,000 at OpenAI, according to the estimates I’ve seen online, which is just a crazy amount. And then there’s also the cost of compute, or computing power — basically the things that you need to be able to train the models and run them. A lot of the companies anticipate that these costs are going to go up over time.

Madhumita Murgia
And that’s what makes DeepSeek’s new model such a revelation.

Cristina Criddle
What DeepSeek has done is really come in here and shake everything up by being able to develop a reasoning model. So one that has cognitive abilities that are similar to how humans reason. It’s very similar to some leading models on the market from OpenAI and Meta. And they say that they’ve been able to do this at a fraction of the cost, with just single-digit millions to be able to train it, although those costs are sort of unclear. But if they can do that, then that just brings the costs down for everybody. And one co-founder was saying to me the other day that by making this open-source, by doing this so cheaply, DeepSeek has really levelled the playing field, because who is gonna pay more for a model that can do effectively the same thing when there is an open-source model out there on the market?

Madhumita Murgia
The fact that DeepSeek’s model is partially open-source means that anyone can use it for free. And that’s a problem for the US competitors who charge to access their models. It all casts doubt on the viability of the closed-source business model of many leading AI firms. I wanted to drill down a bit into the details. Is this technology as much of a cash cow as its inventors claim it is? At the moment there are a few big revenue streams. The first is just simple consumer subscriptions.

Cristina Criddle
They often have a free tier, which anyone can use. And then you get a better model or better performance, or perhaps more questions you can ask in a day if you pay a little bit more.

Madhumita Murgia
But what’s a lot more lucrative for AI companies are business subscriptions.

Cristina Criddle
One company will pay a block amount for users across their company to use either ChatGPT, Claude from Anthropic, Gemini from Google. And actually, with the Google and Microsoft offerings, sometimes that can be bundled into some of the other software that they offer.

Madhumita Murgia
And then bleeding-edge AI companies also sell access to their models to other companies who can build applications on top of them.

Cristina Criddle
And they do this thing called fine-tuning, where they use the larger models to distil into smaller models and personalise them for their use cases — for the specific tasks that they want to do, or to train on specific data that the company has.

Madhumita Murgia
On top of those user revenue streams, there’s also another very lucrative pathway to profit, one that’s been tried and tested by the world’s biggest tech companies, and that’s advertising.

Cristina Criddle
What I’m seeing is lots of writing and creative applications as well. I’ve seen lots of that in advertising. We’ve written that OpenAI has explored advertising as well, in a very similar way where it would surface these brands in search. However, OpenAI has said that’s not imminent, but it’s definitely something it’s considering. And when you think about the sheer number of people using ChatGPT every day — you know, it’s hundreds of millions a week — if they can get these sort of sponsored ads in front of people in the way that Google’s original search engine did . . . It proved very profitable for Google, so perhaps it will be the same for these AI companies.

Madhumita Murgia
Yeah, and Meta is already using AI for its existing massive advertising business, right? And we know that targeted advertising has been the whole basis for Meta and Google’s success.

Cristina Criddle
A hundred per cent. And I think in advertising, it’s just fascinating because you’re seeing them using the LLMs to really be able to help target and test advertising in real time. So what they’re able to do is suggest who a target audience for something might be, and it might not be what you would expect. For example, a Meta executive told me that if you had luxury watches, the idea might be that you should show them to men of a certain age and certain income, when in fact the LLM suggested that you show them to women instead because they were more likely to buy these watches as a present for a man in their lives.

Madhumita Murgia
Yeah, it’s kind of interesting how it is able to capture the nuance of buying psychology rather than the more crude data targeting that we’ve seen over the last decade or so.

Cristina Criddle
Absolutely. Meta has used its AI to generate different backdrops for advertising. So say it’s a lipstick, it would put a different background on the advert, and it’s found that that’s really effective, because once you’ve seen the same advert on Instagram more than a few times, you just get sick of it and develop a negative association with it. However, with their AI, they can make it look different each time, which means that you are getting that brand recognition without the user getting tired. And I know as a consumer I see the same adverts again and again and it just makes me hate the product.

Madhumita Murgia
So the brightest brains of our generation might be working on even more mind-melding advertising tactics. Still, ads and subscriptions aren’t exactly a groundbreaking new business model. And in the meantime, you have the major AI companies locked into fierce competition over users. I’m really interested in how these competitive dynamics are gonna play out. Will any one AI model gain an edge? Even before DeepSeek’s arrival on the scene, the companies making AI models were struggling to differentiate themselves. They’re all essentially selling the same thing.

Anton Korinek
It looks like this is really becoming a crowded field, and there are many players pushing into that market, producing outputs of similar quality.

Madhumita Murgia
Anton Korinek is a professor of economics at the University of Virginia who studies AI. The issue, Korinek says, is that the model builders can’t charge more for these services because the field is so competitive. When OpenAI launched ChatGPT in late 2022, it was the only company with such an advanced chatbot. Today it’s one of many.

Anton Korinek
It turns out pretty much anybody who is willing to spend a sufficient amount of money is able to produce a GPT-4-level system. And there are many players pushing into that market producing outputs of similar quality. And if you are a business user, whether you use one or the other makes very little difference to you. And that means you just go for whatever’s cheapest. And essentially, the incumbents have very little moat.

Madhumita Murgia
What Anton means by very little moat is that there isn’t much separating the features and performance offered by the likes of ChatGPT or other AI companies from those of potential challengers. Generative AI companies are basically in a price war. That’s why Korinek says these companies have set their sights on a much more ambitious goal.

Anton Korinek
They are essentially betting on these models rapidly becoming smarter and reaching something like human-level intelligence and beyond, within the next couple of years. And if that materialises, then they would have AI agents that are human-level intelligent at their disposal and that can pursue functions throughout the economy and generate very high financial returns based on that.

Madhumita Murgia
Aiming for human-level intelligence, what’s known in the field as AGI, makes today’s lack of profits less of an issue because whoever reaches AGI first would be able to reap huge financial rewards, at least in theory. And in the meantime, well, investors might just have to be patient. More on that after the break.

[MUSIC PLAYING]

More than two years on from the launch of ChatGPT, the big nagging questions are about the applications and the financial sustainability of generative AI. The platforms behind the leading AI tools say that the technology is going to transform our economy, but they’re facing high costs and a hyper-competitive playing field. Both of those challenges are only going to get more vexing.

Back in the classroom at Capital City College in London, Vahap Can is teaching his night school students about AI drift — basically, when you ask AI a question and it starts to go off topic.

Vahap Can
As in the example I gave you last week, AI is like driving a car. So prompts are your hands controlling the steering wheel. If you don’t control the steering wheel, of course the car is gonna start drifting.

Madhumita Murgia
A good prompt engineer can stop AI from going off course. Students in the class were keen to play around with novel uses for the technology.

Student 1
I typed in a prompt asking it to describe what quantum computing is, and to limit the response to about 100 words.

Vahap Can
And has it come up with anything good?

Student 1
It has indeed, yeah. It’s got a nice short description of what quantum computing is.

Student 2
I’ve asked it to describe the technological innovation of using surplus heat from computation and data centres, and that’s the initial problem. And I was gonna revise the second prompt based on that output.

Student 3
For instance, I want to write an angry letter to the landlord about the high renting rate. Write an . . . uh no. Send a letter, no, email to my landlord about the ridiculous rent increase, and set the tone to be more sarcastic.

Madhumita Murgia
An angry letter to the landlord is just one of the many use cases that have come out of readily available AI. The companies offering these generative AI tools may not be making any profits, but the usage of this technology is widespread. Here’s Anton Korinek again.

Anton Korinek
Right now, they’re not profitable. Now, the flip side of that is that all of us who are consumers of this technology essentially get intelligence as a commodity, as some in the tech industry call it. We get really cheap access to amazing systems, and that’s gonna deliver productivity benefits throughout the economy.

Madhumita Murgia
Right now, the price war between AI companies means consumers are able to get AI for cheap, and AI companies hope that prolific usage will eventually translate to a profitable business model. But that hasn’t happened yet. Ultimately, it might come down to something called Jevons paradox.

Anton Korinek
Jevons paradox is the observation that for some products, when you lower the price, you can actually earn more money on it. This is, for example, something that Henry Ford already speculated on in the 1910s. He said, well, let me lower the price of my cars. (Car horns) And then ultimately, he sold many more of them because they became more affordable. So when the price goes down, the demand goes up by more than what you lost in revenue from the price cuts.

Madhumita Murgia
Jevons paradox became particularly newsworthy when Microsoft CEO Satya Nadella referenced it in relation to DeepSeek’s breakthrough in January.

Anton Korinek
The CEO of Microsoft suggested that the efficiency gains that DeepSeek had achieved through their new model architectures are going to lead to something like Jevons paradox. So I think he suggested, well, these new AI systems, they’re becoming so much more efficient. You can now run 10 copies of a GPT-4-level system on the same amount of compute that you could previously run only one on. And these are going to be so useful that people are gonna buy more than 10.

Madhumita Murgia
And Anton, what do you think? Are you convinced that Jevons paradox is going to save the AI industry?

Anton Korinek
In the short term, do I buy that? Probably not right away. But in the medium term, I think it’s right. In the medium term, we are gonna have much more efficient and much more powerful AI for a cheaper price. And that’s gonna accelerate the further advancement of AI.

Madhumita Murgia
Jevons paradox is one potential outcome. It could mean low subscription prices continue. And then eventually the costs go down and the big AI companies start turning a profit. But there’s another way that things might pan out. The massive upfront costs might mean that companies cut back on the quality of the product.

(Sound of an airplane flying overhead, followed by an airport announcement)

Alex Chalmers
About sort of a year ago, Nathan Benaich of Air Street Capital and I co-wrote this essay where we compared these businesses to airlines.

Madhumita Murgia
Alex Chalmers is a researcher, writer and co-author of the State of AI Report. He also used to be a venture capital investor with Air Street Capital.

Alex Chalmers
They have to put a load of capex upfront, before you receive a cent of revenue, to build a relatively undifferentiated product. And there are only certain ways of improving your efficiency and margins, and these are often by worsening the consumer experience. So airlines do this by layering in more fees and slowly taking away the freebies and the free stuff; the frontier model companies might be slowly reducing free usage allowances. And the other parallel, I think, between frontier model companies and airlines is that to win people over, they have to keep engaging in these price wars to win hearts and minds.

Madhumita Murgia
It’s not hard to imagine a world where generative AI is run like a budget airline, with users given crummy service and forced to pay for extra features. In the end, it might not be the technology itself that differentiates one frontier model from another, but the consumer experience. The winners and losers of the AI race might come down to who can manage the best service. I asked the FT’s Cristina Criddle how she thinks the competition between these foundation model companies will shake out.

Cristina Criddle
I think what’s gonna be really interesting is you are going to see very fierce competition, and you’re gonna see these models kind of levelling the playing field quite quickly. I think the user interface is gonna be really important here. What’s gonna make users want to come back for more, be loyal to your platform? And that might not necessarily be a platform that’s run by a model maker. It might be something like Perplexity, which is what’s called a wrapper, where you can switch in the different models. And that’s something that Microsoft has talked about a lot as well: having these models as a commodity and actually being able to build the applications on top of them — that’s where the real, like, magic happens, and that’s what’s gonna bring consumers and businesses back. So I think we’re gonna see a lot more focus on the aesthetic of these things — the orchestration, as they say technically — but really, like, how people are interacting with it and how delightful it is to use and how effective it is for the things you want to use it for.

[MUSIC PLAYING]

Madhumita Murgia
In this series, we’ve heard about the uses for AI and the struggle for a killer app. We’ve also broken down why there’s no obvious pathway to profitability for companies that develop AI models. In the end, these companies may end up reverting to advertising, a solid revenue stream for tech platforms like Meta and Google. In some ways, that might signal that the AI business model doesn’t have to be as revolutionary as we imagined. Ultimately, I think it’s a sign that this technology is more of a highly inventive tool than it is a hard break from the past.

Amid intense competition, what might happen is that the models produced by today’s frontrunners will fade into the background, the same way that we hardly notice the underlying tech in our smartphones. We don’t yet know if AI will become profitable in the future. But even if it is, it may not be the companies that have come to define and dominate the sector today that will make all the money.

You’ve been listening to Tech Tonic from the Financial Times. My name is Madhumita Murgia. Our senior producer is Edwin Lane, and our producer is Josh Gabert-Doyon. And our exec producer is Manuela Saragosa. Breen Turner, Sam Giovinco and Joe Salcedo are our sound engineers. Original scoring by Metaphor Music. Our global head of audio is Cheryl Brumley.

