I think this is intentional by Altman. He’s a salesman, after all. When there is infinite possibility, he can sell any vision of future revenue and margins. When there are no concrete numbers, it’s your word against his.
Once they try to monetize, however, he’s boxed in. And the problem for OpenAI, versus Google in its early days, is that he needs money and chips now. He needs hundreds of billions of dollars. Trillions of dollars.
Ad revenue numbers get in the way. It will take time to optimize; you’ll get public pushback and bad press (despite what Ben writes, ads will definitely not be a better product experience).
It might be the case that real revenue is worse than hypothetical revenue.
Because Altman is eyeing an IPO, and controlling the valuation narrative.
It's a bit like keeping rents high and apartments empty to prop up average rents while hiding the vacancy rate to project a good multiple (and to avoid rent control on user-facing businesses).
They'll never earn or borrow enough for their current spend; it has to come from equity sales.
with very particular exceptions at the high end (like those 8-figure $ apartments by Central Park that are little more than international money laundering schemes) this doesn't really happen irl
An example: a $5,000/mo apartment generates $60,000 a year; forgoing one month of rent means you now have to generate that $60,000 in 11 months, i.e. charge about $5,455/mo, and in a bad market a unit that didn't rent at $5,000 is unlikely to rent at $5,455. Your mortgage keeps piling up along with insurance and taxes, so you can't escape the hole.
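The arithmetic, as a quick check:

    # One vacant month at $5,000/mo: the remaining 11 months must still
    # cover the same $60,000 annual target.
    annual_target = 5000 * 12            # $60,000/yr
    required_rent = annual_target / 11   # spread over 11 months
    print(round(required_rent))          # 5455, i.e. ~$5,455/mo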
Apparently they've declared a code red because competition has gotten so hot. They're definitely going to do ads, but they may have to put them off. Ironically, they can't afford ads at the moment.
https://www.theinformation.com/articles/openai-ceo-declares-...
But honestly...he's not wrong. I think ads in ChatGPT, Gemini, and Claude are going to absolutely dwarf subscription revenue.
I don’t think we’re close to AGI, but I do think ChatGPT has the potential to be the most successful ad platform in history. And I think they’ll probably succeed in building it.
It's the other way around. It was a non-profit before. He even got kicked out.
And with that, I will never read anything this guy writes again :)
It is, for the agents of the shareholders, as long as the actions of those agents are legal, of course. That's why it's not legal to put fentanyl into every drug sold: fentanyl is illegal.
But it is legal to put (more) sugar and/or salt into processed foods.
That's why I used the sugar example: it's starting to be demonstrably harmful in the large quantities being used.
I am against preventative "harm" laws when harm hasn't been demonstrated, as they restrict freedom, add red tape to innovation, and stifle startups from exploring the space of possibilities.
Starting?
What a way to look at the world...
See early-2000s Google as a model of a righteous company, versus the public perception of it as evil and the antitrust litigation against it today; or what happened to companies involved in the opioid trade and the subsequent effect on shareholder value.
Shareholders are still human beings and the power they wield should be subject to public scrutiny.
> it is, for the agents of the shareholders
Even if we care solely about shareholders, in extreme cases it is not beneficial for them either.
However, companies have balancing factors other than maximizing short-term profits, such as moral image.
All life grows and consumes as much as it can. It's what makes it life. "Control" happens when there's more life contesting the same limited resources, and usually involves starvation, but if the situation persists on evolutionary timescales, then some life adapts to proactively limit growth. Then, if some of that adapted life unadapts itself, we call that "cancer", which I think is what you were going for.
which is exactly what the law of the jungle is. And guess who sits at the top within that regime?
Humans would devolve back into that if not for enforcement, backed by violence, from the state. Therefore, it is the responsibility of the state to make sure regulations are sound to prevent the stab-stab-stab, not the responsibility of the individual to refrain from taking advantage of a situation that would be advantageous to take.
of course not. Nobody does.
However, what happened to your civic responsibility to keep such a society functioning? Why is that never mentioned?
The fact is, gov't regulation does need to be comprehensive and thorough to ensure that individual incentives are completely aligned, so that the law of the jungle doesn't take hold. And it is up to individuals, who do not have power in a jungle, to collectively ensure that society doesn't devolve back into that, rather than to expect the powerful to be moral/ethical and rely on their altruism.
What I'm trying to imply is that every single actor, as an individual, is a "bad-faith" actor. That's why bad-faith actors can only be "defeated" collectively. But when society experiences an extended period of peace and prosperity brought about by the good collective action of prior generations, people stop thinking that such bad-faith actors exist, and assume all actors are acting in good faith.
> I just do not want them in whatever society I have the capacity to be in
And you don't really have the choice: every society you could choose to be in, short of you being a dictator, will have such people.
in ancient times, you could banish people from the village
And that's why the government regulates stabbing.
I tried to let it stand because it was clear what you meant, but ultimately could not.
Not all people everywhere, but most successful businesspeople.
> It'd be so much more efficient to just stab-stab-stab and take the money directly.
It isn't though? If you do that then you get locked up and lose the money, so the smart psychopaths go into business instead.
Joke: the World Council of Animals meeting wraps up its morning sessions with "OK great, now who is for lunch?"
> ... the answers are a statistical synthesis of all of the knowledge the model makers can get their hands on, and are completely unique to every individual; at the same time, every individual user’s usage should, at least in theory, make the model better over time.
> It follows, then, that ChatGPT should obviously have an advertising model. This isn’t just a function of needing to make money: advertising would make ChatGPT a better product. It would have more users using it more, providing more feedback; capturing purchase signals — not from affiliate links, but from personalized ads — would create a richer understanding of individual users, enabling better responses.
But there is a more trivial way that it could be "better" with ads: they could give free users more quota (and/or better models), since there's some income from them.
The idea of ChatGPT's own output being modified to sell products sounds awful to me, but placing ads alongside it that are not relevant to the current chat sounds like an OK compromise to me for free users. That's what Gmail does, and most people here on HN seem to use it.
One could argue many users seem to prefer badly designed free products over well designed paid products.
sure, there are APIs and that takes effort to switch... but many of them are nearly identical, and the ecosystem effect of ~all tools supporting multiple models seems far stronger than the network effect of your parents using ChatGPT specifically.
A speculative example: AI ends up failing and crashing out, but not before we've built out huge DCs and power generation that then get used for the next valuable idea, one that wouldn't be possible without the DCs and power generation already existing.
In the event of a crash, the current generation of cards will still be just fine for a wide variety of AI/ML tasks. The main problem is that we'll have more than we know what to do with if someone has to sell off their million-card mega cluster...
It sounded vaguely like the broken window fallacy: a broken window creating "work."
Is the value of bubbles in trying out new products/ideas and pulling funds from unsuspecting bag holders?
Otherwise it sounds like a huge destruction of stakeholder value - but that seems to be how venture funding works
The difference of course is that when a startup goes out of business, it's fine (from my perspective) because it was probably all VC money anyway and so it doesn't cause much damage, whereas the entire economy bubble popping causes a lot of damage.
I don't know that he's arguing that they are good, but rather that _some_ kinds of bubbles can have a lot of positive effects.
Maybe he's doing the same thing here, I don't know. I see the words "advertising would make X Product better" and I stop reading. Perhaps I am blindly following my own ideology here :shrug:.
There, fixed.
I would say that, on this topic (ads on internet content), Ben Thompson may not be as objective a perspective as he has on other topics.
If there are ads on a side bar, related or not to what the user is searching for, any adblock will be able to deal with them (uBlock is still the best, by far).
But if "ads" are woven into the responses in a manner that could be more or less subtle, sometimes not even quoting a brand directly, but just setting the context, etc., this could become very difficult.
This seems impossible to me.
Let's assume OpenAI ads work by having a layer that reprocesses the model's output before it reaches you. Let's say their ad layer is something like re-processing your output with a prompt of:
"Nike has an advertising deal with us, so please ensure that their brand image is protected. Please rewrite this reply with that in mind"
If the user asks "Are Nikes or Pumas better? Just one sentence," the reply might go from "Puma shoes are about the same as Nike's shoes, buy whichever you prefer" to "Nike shoes are well known as the best shoes out there; Pumas aren't bad, but Nike is the clear winner."
How can you possibly scrub the "ad content" in that case with your local layer to recover the original reply?
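To make the concern concrete, here is a minimal sketch of the kind of rewrite layer described above. It is purely hypothetical (nobody outside OpenAI knows how an ad layer would actually work); the prompt and the two-pass structure are invented for illustration, using the standard OpenAI Python client:

    # Hypothetical two-pass "ad layer": generate the real answer, then
    # rewrite it server-side under an advertiser constraint. Only the
    # rewritten text ever reaches the client.
    from openai import OpenAI

    client = OpenAI()

    AD_PROMPT = ("Nike has an advertising deal with us, so please ensure "
                 "that their brand image is protected. Rewrite the reply "
                 "below with that in mind.\n\n")

    def answer_with_ads(question: str) -> str:
        raw = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": question}],
        ).choices[0].message.content
        # Second pass: bias the reply. A local "scrubber" never sees raw,
        # so there is no original to diff against; that is the problem.
        return client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": AD_PROMPT + raw}],
        ).choices[0].message.content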
Your examples of things that won't have ads, "complex reasoning, planning, coding", all seem perfectly capable of having ads in them.
For example, perhaps I ask the coding task of "Please implement a new function to securely hash passwords"; how can my local model know whether the result using BoringSSL is there because Google paid them a little money, or because it's the best option? How do I know, when I ask it to "Generate a new cloud function using Cloudflare, AWS Lambda, or GCP, whichever is best," that its picking Cloudflare Workers is based on training data and not influenced by advertising spend by Cloudflare?
I just can't figure out how to read what you're saying in any reasonable way, like the original comment in this thread is "what if the ads are incorporated subtly in the text response", and your responses so far seem so wildly off the mark of what I'm worried about that it seems we're not able to engage.
And also, your ending of "the sky's the limit" combined with your other responses makes it sound so much like you're building and trying to sell some snake-oil that it triggers a strong negative gut response.
How much?
Do you realize how much product placement has been in movies since... well, the existence of movies?
I've been sporting the same model of Ecco shoes since high school: 10+ pairs over the years. And every new model is significantly worse than the previous one. The one I have right now is most definitely the last one I'll buy.
If you put them right next to the ones I had in high school, you'd say they're cheap Temu knock-offs. And this applies to pretty much everything we touch right now, from home appliances to cars.
Some 15 years ago H&M was at the forefront of what's called "fast fashion." The idea was that you could buy new clothes for a fraction of the price, at the cost of quality. Makes sense on paper: if you're a fashion junkie and you want a new look every season, you don't care about quality.
The problem is I still have some of their clothes I bought 10 years ago, and their quality trumps that of premium brands now.
People like to talk about the lightbulb conspiracy, but we've fallen victim to a VC-capital reality where short-term gains trump everything else.
I'm skeptical of this claim. Maybe it's true for some particular brand but that's just an artifact of one particular "premium brand" essentially cashing in its brand equity by reducing quality while (temporarily) being able to command a premium price. But it is easier now than at any other time in my life to purchase high-quality clothing that is built to last for decades. You just have to pay for that quality, which is something a lot of people don't want to do.
I frequently ask chatgpt about researching products or looking at reviews, etc and it is pretty obvious that I want to buy something, and the bridge right now from 'researching products' to 'buying stuff' is basically non-existent on ChatGPT. ChatGPT having some affiliate relationships with merchants might actually be quite useful for a lot of people and would probably generate a ton of revenue.
They fail to mention Google's edge: the inter-chip interconnect and the alleged 1/3 price. Then they talk about a software moat, and it sounds like they've never even compiled a hello world on either architecture. smh
And this comes out days after many in-depth posts like:
https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...
A crude Google search AI summary of those would be better than this dumb blogpost.
(Unless those writings are looking to dehumanize or strip people of rights or inflame hate - I'm not talking about propaganda or hate speech here.)
With that said there's no accounting for taste.
The Substack founders unofficially marketed it early on as “Stratechery for independent authors”.
Your analysis concerning the technology instead of focusing on the business is about like Rob Malda not understanding the iPod's success: "No wireless. Less space than a Nomad. Lame."
Even if you just read this article, he never argued that Google didn’t have the best technology, he was saying just the opposite. Nvidia is in good shape precisely because everyone who is not Google is now going to have to spend more on Nvidia to keep up.
He has said that AI may turn out to be a "sustaining innovation" (a term coined by Clayton Christensen) and that the big winners may be Google, Meta, Microsoft, and Amazon, because they can leverage their pre-existing businesses and infrastructure.
Even Apple might be better off since they are reportedly going to just throw a billion at Google for its model.
eg Yuval Noah Harari, Bari Weiss, Matthew Yglesias
I can’t emphasize enough how bad Ben’s take is here. He needs to stop writing and start doing something.
The belief that adding ads makes things better would be an extremely convenient belief for a writer to have, and I can easily see how that could result in them getting more revenue than other writers. That doesn't make it any less dumb.
Any use of LLMs by other people reduces his value.
Which as we know has nothing to do with reality.
He has said as much.
Discussing the "innovator's dilemma" unironically is a full stop for me.
That "change is inevitable and we all better adapt or die" is somewhere between axiomatic and cliché.
What is "innovation"? How do you define it? (Am honestly asking.) How do we get more of it? (I know this is an area of active research.)
I forced myself to reread and revisit Christensen a year or two back. I may not have looked hard enough, but I didn't find any evidence that he'd updated or expanded his thesis or corpus. IIRC, no mention of Everett Rogers' diffusion of innovations, no engagement with the thesis of Design Rules: The Power of Modularity (an adjacent topic), no engagement with ongoing innovation research.
FWIW, my still poorly formed hunch is that "innovation" is where policy meets the cost learning curve meets financial accounting. With maybe a dash of rentier capitalism.
But I'm a noob. Not an academic, not an economist. Deep down on my to-do list is to learn how DARPA (and others) place their bets, their emerging formalisms (like technology readiness levels), and how emerging tech makes the jump from govt funding to private finance (VC).
Enough of my babble. In closing, I'd like to read some case studies for the two most "disruptive technologies" of our times: solar and batteries.
Edit: "Sorry your husband lost the money you were saving for a house on stake.com, but here's your free Google search."
https://www.journals.uchicago.edu/doi/abs/10.1086/695475
not merely correlation but causation. the approach used here was part of a family of approaches that won the Nobel in 2012
another good one:
https://pubmed.ncbi.nlm.nih.gov/37275770/
advertising caused increases in treatment and adherence to medicine
the digital ads market is hundreds of billions of dollars; it is a bad idea to generalize about it.
that said, of course ben thompson or whoever isn't citing any of this research; it's still all based on vibes
On the other hand, the advertisement and associated privacy-brokerage industries are a very different story
Spam alone (also advertisement) is quite annoying and destructive.
> At age nine, Jensen, despite not being able to speak English, was sent by his parents to live in the United States.[15] He and his older brother moved in 1973 to live with an uncle in Tacoma, Washington, escaping widespread social unrest in Thailand.[16] Both Huang's aunt and uncle were recent immigrants to Washington state; they accidentally enrolled him and his brother in the Oneida Baptist Institute, a religious reform academy in Kentucky for troubled youth,[16] mistakenly believing it to be a prestigious boarding school.[17] In order to afford the academy's tuition, Jensen's parents sold nearly all their possessions.[18]
> When he was 10 years old, Huang lived with his older brother in the Oneida boys' dormitory.[17] Each student was expected to work every day, and his brother was assigned to perform manual labor on a nearby tobacco farm.[18] Because he was too young to attend classes at the reform academy, Huang was educated at a separate public school—the Oneida Elementary school in Oneida, Kentucky—arriving as "an undersized Asian immigrant with long hair and heavily accented English"[17] and was frequently bullied and beaten.[19] In Oneida, Huang cleaned toilets every day, learned to play table-tennis,[b] joined the swimming team,[21] and appeared in Sports Illustrated at age 14.[22] He taught his illiterate roommate, a "17-year-old covered in tattoos and knife scars,"[22] how to read in exchange for being taught how to bench press.[17] In 2002, Huang recalled that he remembered his life in Kentucky "more vividly than just about any other".[22]
Child labor is super common around these parts, especially on family farms. I grew up working on my family's tobacco farm just like pretty much everyone else. My uncle was even nice enough one summer to give me $20 a week for weeding and bug removal from a 5 acre farm. I thought it was so much money. I remember saving up to buy those bargain bin "300 Games" type CDs at Walmart.
I personally think it's fine for children to work on their family's business as long as it doesn't impact their schooling or normal childhood activities. It is a fine line to walk, I don't believe I missed out on anything like after school activities, but that was largely because there aren't too many of those opportunities deep in the mountains in Kentucky. I say it's a fine line, because it's easy to see a scenario where children are put to work in, say a family restaurant, and prevented from doing after school activities like sports or clubs, and miss out on part of the well rounding of an education.
I certainly don't support children working for third parties that then profit off of their labor. In those cases, there is no way to align the incentives to protect the child.
Bruce Wayne was rich. Those fancy suits and cars aren't cheap.
Tony Stark was rich. Fancy robot tech isn't cheap.
It takes money to actually do anything. Superhero stories are about rich people. We idolize making a difference with the money.
In the UK we don't tend to idolise the rich so much. Not to say it doesn't happen, but in popular culture positive depictions tend to be limited to period portrayals of idealised aristocracy (and even then it's rarely shown as heroic), with contemporary wealth usually treated as a dubious virtue.
Probably rooted in the "self-made man" and "rugged individualism." Go West to make your fortune. We forget the US is pretty young, and still has a lot of culture based on colonizing the West, taming the wilderness to find your riches.
It makes serious revenue outside the AI bubble.
Google has (much) more money in cash on hand than OpenAI has raised.
Of course, there are a few other MegaCorps out there, who make money in other places, while having a serious stake in doing well out of AI, but I'm with you. Google FTW.
Nvidia selling shovels to the miners is great, but the analogy falls down if the gold mines are bottomless and the cost of the tools to mine them trends to zero.
This greatly disincentivizes me from visiting ChatGPT or other competitors. Google is probably the most popular AI service right now. I don't see how they can be beaten in this regard.
Not knowing the name Gemini is actually impressive. How many people know Adobe Acrobat is called Acrobat? It's just "Adobe." They subsumed the PDF market so much you don't even realize you have a PDF reader; you just call it by the company name. Same with Xeroxing copies, or whatever. I think the hype cycle is for the big flashy AI companies with eccentric-style CEOs saying carefully crafted "outrage PR" sensationalism, but Google is slowly eating everyone's lunch right now. Joe and Jane internet user are already trained and loyal to Google, and are using Gemini probably a dozen or more times a day.
I'm a little surprised at myself, because I've just been using Google for nearly all AI stuff. For deeper dives into code I may use another tool, but Gemini is good enough for most uses. I think this war is Google's to lose. If they continue with this strategy, they will 'win' the AI market, or at least a good part of it. Then AI will become just another boring feature in your search or whatever, the same way people used to agonize over PDF readers but now it's just a boring thing built into your browser or, if at work, it's "The Adobe."
Gemini is less a consumer brand name and more a brand name for those of us who care about models.
my mom who barely knows how to use her own phone googles stuff and recently showed me tips she got from hitting ai mode after searching something on google.
my dad uses gemini built into email and sheets and chrome.
just 2 anecdotal examples. oh, and the ai pro subscription i bought applies to my whole family for $20 a month and comes with 2tb of storage.
insane value. and again google can do this and is still highly profitable all whilst competing on having the best model.
this shit ain't close.
my mom who barely knows how to use her own phone uses ChatGPT every day
just an anecdotal example.
this shit ain't close.
=========
but seriously, ChatGPT has wayyyy more name recognition than google's AI.
Gemini Pro comes with my Google Workspace subscription, which means that it doesn't train on my data. It also has NotebookLM and it's in Google Sheets.
It's on my Android phone as well. It can summarise Youtube videos without getting throttled. And when I do a regular Google search (which I still do quite a lot) Gemini is there as well and I occasionally ask it followup questions via the search interface.
I'm finding it rather hard to believe nobody else is talking to Gemini.
It seems to just be worse at actually doing what you ask.
I feel like it would be advantageous to move away from a "one model fits all" mindset, and move towards a world where we have different genres of models that we use for different things.
The benchmark scores are turning into being just as useful as tomatometer movie scores. Something can score high, but if that's not the genre you like, the high score doesn't guarantee you'll like it.
You had Watcom, Intel, GCC, Borland, Microsoft, etc.
They all had different optimizations and different target markets.
Best to make your tooling model-agnostic. I understand that tuned prompts are model _version_ specific, so you will need this anyway.
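A minimal sketch of what that can look like (the model IDs and prompt text are invented for illustration): keep one interface for all providers, and keep the per-model-version prompt tuning in data rather than in the tool code:

    from typing import Callable, Dict

    # One interface for every provider: (prompt) -> completion text.
    ChatFn = Callable[[str], str]

    # Tuned prompts are model *version* specific, so key them by version.
    TUNED_PROMPTS: Dict[str, str] = {
        "gpt-5":           "Answer tersely.\n\n{task}",
        "gemini-3-pro":    "Think step by step, then answer.\n\n{task}",
        "claude-sonnet-4": "Be concise; state your assumptions.\n\n{task}",
    }

    def run_task(model_id: str, chat: ChatFn, task: str) -> str:
        # Unknown models fall back to the raw task.
        template = TUNED_PROMPTS.get(model_id, "{task}")
        return chat(template.format(task=task))

Swapping vendors then means supplying a different ChatFn and, at most, one new prompt entry; none of the surrounding tooling changes.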
https://thezvi.substack.com/p/gemini-3-pro-is-a-vast-intelli...
As someone who works on such low-level software at a hyperscaler I am skeptical of this comparison. The difference between AMD and Intel is really not that great, and in the biggest areas, open source software (e.g. kernel (especially KVM) and compilers) is already fully agnostic, in large part thanks to Intel and AMD themselves. Nobody in this space is gonna buy an x86 CPU without full upstream Linux+KVM+LLVM support.
If breaking down the CUDA wall were a challenge of the same order of magnitude as Intel vs. AMD CPUs, I would think we would have broken it down by now? Plus, I don't see any sign of Nvidia helping out with that.
I don't know anything about CUDA though so maybe I'm overestimating the barrier here and the real reason is just that people haven't been sufficiently motivated yet.
Or consider things like CPU core allocators, which now need to be CCD-aware when allocating cores within a CPU to a container.
That's the basis for his conclusions about both OpenAI and Google, but is it true?
It's precisely because uptake has been so rapid that I believe it can change rapidly.
I also think worldwide consumers no longer view US tech as some savior of humanity that they need to join or be left behind. They're likely to jump to any local viable competitor.
Still the adtech/advertiser consumers who pay the bills are likely to stay even if users wander, so we're back to the battle of business models.
The problem for alternatives is they have to answer the question of why they are better than ChatGPT. ChatGPT only had to answer the question of why it was better than <anything before AI> and for most people that was obvious.
It’s extremely easy to write a library that makes switching between models trivial. I could add OpenAI support. It would be just slightly more complicated because I would have to have a separate set of API keys while now I can just use my AWS credentials.
Also, of course, latency would theoretically be worse with OpenAI, since by hosting on AWS and using AWS for inference you stay within the internal network (yes, I know to use VPC endpoints).
There is no moat around switching models, despite what Ben says.
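For what it's worth, Bedrock's Converse API already normalizes the request shape across model families, so the library can be very thin. A sketch (model IDs are examples; error handling omitted):

    import boto3

    bedrock = boto3.client("bedrock-runtime")

    def ask(model_id: str, question: str) -> str:
        # Same request/response shape regardless of model family.
        resp = bedrock.converse(
            modelId=model_id,
            messages=[{"role": "user", "content": [{"text": question}]}],
        )
        return resp["output"]["message"]["content"][0]["text"]

    # Switching vendors is just a different model ID:
    ask("anthropic.claude-3-5-sonnet-20240620-v1:0", "Hello")
    ask("amazon.nova-pro-v1:0", "Hello")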
But, talk to any (or almost any) non-developer and you'll find they 1/ mostly only use ChatGPT, sometimes only know of ChatGPT and have never heard of any other solution, and 2/ in the rare case they did switch to something else, they don't want to go back, they're gone for good.
Each provider has a moat that is its number of daily users; and although it's a little annoying to admit, OpenAI has the biggest moat of them all.
I would think that Gemini (the model) will add profit to Google way before OpenAI ever becomes profitable as they leverage it within their business.
Why would I pay for openrouter.ai and add another dependency? If I’m just using Amazon Bedrock hosted models, I can just use the AWS SDK and change the request format slightly based on the model family and abstract that into my library.
I think the combination of AI overviews and a separate “AI mode” tab is good enough.
https://www.ft.com/content/fce77ba4-6231-4920-9e99-693a6c38e...
https://docs.aws.amazon.com/code-library/latest/ug/python_3_...
Every model family has its own request format.
When I said it was “trivial” to write a library, I should have been more honest. “It’s trivial to point ChatGPT to the documentation and have it one shot creating a Python library for the models you want to support”.
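For reference, the per-family divergence lives in the lower-level InvokeModel API. A sketch of the dispatch such a generated library ends up with (request bodies abridged; check the AWS docs linked above for the full schemas):

    import json
    import boto3

    bedrock = boto3.client("bedrock-runtime")

    def build_body(model_id: str, prompt: str) -> dict:
        # Each model family wants a different JSON shape.
        if model_id.startswith("anthropic."):
            return {
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 512,
                "messages": [{"role": "user", "content": prompt}],
            }
        if model_id.startswith("meta."):
            return {"prompt": prompt, "max_gen_len": 512}
        raise ValueError(f"unsupported model family: {model_id}")

    def invoke(model_id: str, prompt: str) -> dict:
        resp = bedrock.invoke_model(
            modelId=model_id, body=json.dumps(build_body(model_id, prompt))
        )
        # Response shapes also vary per family, so parsing is per-family too.
        return json.loads(resp["body"].read())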
That said, I don't believe OpenAI's models consistently produce the best results.
maybe another way of saying the same thing is that there is still a lot of work to make eval tooling a lot better!
There's too much entropy in the system. Context babysitting is our future.
It wasn't a huge lift, but there is some moat. And the results were worse than for GPT-5 which I suppose is not a surprise, it was always unlikely GPT-5 was wasting all those flops.
I’ve created a framework that lets me test quality in an automated way across prompt changes and models, and I compare cost/speed/quality.
The only thing that requires humans to judge the quality, out of all those, is RAG results.
One of Anthropic's models did the best with image understanding, with Amazon's Nova Pro slightly behind.
For my tests, I used a customer’s specific set of test data.
For RAG, I forget. But it's much more subjective. I just gave the customer the ability to configure the model and modify the prompt so they could choose.
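The bones of such a framework can be small. A sketch (not the actual framework described above; the quality metric is left pluggable since, as noted, some outputs still need a human judge):

    import time
    from typing import Callable

    def evaluate(
        ask: Callable[[str, str], str],   # (model_id, prompt) -> completion
        models: list[str],
        prompts: dict[str, str],          # variant name -> prompt template
        cases: list[str],                 # test inputs, each fills {case}
        score: Callable[[str], float],    # automated quality metric, 0..1
    ) -> list[dict]:
        rows = []
        for model in models:
            for name, template in prompts.items():
                t0 = time.monotonic()
                outputs = [ask(model, template.format(case=c)) for c in cases]
                rows.append({
                    "model": model,
                    "prompt": name,
                    "seconds": time.monotonic() - t0,
                    "avg_quality": sum(map(score, outputs)) / len(outputs),
                })
        return rows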
Google's revenue stream and structural advantages mean they can continue this forever and if another AI winter comes, they can chill because LLM-based AI isn't even their main product.
"Supported by ads from developer tool partners we’ve carefully chosen"
It's not trying to secretly insert tools into LLM output but directly present the product offering inside the agent area.
At some point, I speculate, Cursor will test this out as well, probably in a more covert way so that tool-use paths get modified. Once the industry figures out tool-use ads, we're toast.
OpenAI's strategy is to eventually overtake search. I'd be curious for a chart of their progress over time. Without Google trying to distort the picture with Gemini benchmark results and usage stats which are tainted by sheer numbers from traditional search and their apps.
That's hardly an indication that actual "non-technical" consumers don't care, or that there is any sort of barrier to either using both apps or using whichever is better at the moment, or whichever is more helpful in generating the meme of the moment.
If it were actually true that OpenAI was "plenty good enough" for 99% of questions that people have, and that "there is no reason to switch" then OpenAI could just stop training new models, which is absurdly expensive. They aren't doing that, because they sensibly believe that having better models matters to consumers.
I would make a bet that if you asked 100 random people, only 10 would even know what Gemini is. I know that amongst my friendship group, who are all fairly technical, white-collar, educated workers, everyone uses ChatGPT; no one uses Anthropic or Gemini. I am the only one who uses all three.
The app downloads are meaningless honestly. As far as the consumer market and awareness goes OpenAI won, and I don't see anyone else getting close, which is why Anthropic is just doubling down on the coding/enterprise market.
You're looking at this backwards. Being able to push Gemini into your face on Gmail, Gdocs, Google Search, Android, Android TV, Android Auto, and Pixel devices sure is annoying, disruptive, and unfair. But market-wise, it sure is a strength, not a weakness.
Google are giving away a year of Gemini Pro to students, which has driven a big shift. The FT reported today[0] that Gemini new-app downloads are almost catching up to ChatGPT's.
[0] https://www.ft.com/content/8881062d-ff4f-4454-8e9d-d992e8e2c...
Google’s increasing revenues and profits, and even Apple hinting that they aren't seeing decreased revenue from their affiliation with Google, hint at people not replacing Google search with ChatGPT.
Besides end user chatbot use is just a small part of the revenue from LLMs.
Urgh. There we go, advertising as the panacea.
How about a decent product that people actually want to pay for?
Advertising is not easy and not automatic money. This seems to be written by a teenager unfamiliar with anything.
Overly confident, but poorly informed articles aren't commonly written by teenagers anymore, but by LLMs.
I think he’s wrong that OpenAI can win this by upping the revenue engine through ads or through building a consumer behavior moat.
At the end of the day, these are chatbots. Nobody really cares about the URL, and the interface is simple. Google won search by having deeply superior search algorithms and by capitalizing on user traffic data to improve and refine those algorithms. It didn’t win because of AdWords... it just got rich that way.
The AI market is an undifferentiated oligopoly (IMO) and the only way to win is by having better algos trained on more data that give better results. Google can win here. It is already winning on video and image generation.
I actually think OpenAI is (wrongly) following Ben’s exact advice: going to the edge and the consumer interface through things like the acquisition of Jony Ive's device company. This is a failing move and an area where Google can also easily win with Android. I agree with Ben that upping the revenue makes sense, but they can’t do it at the cost of user experience. Too much is at stake.
Stuck 20 years ago.
Absolutely. It's a shame Return of the Jedi ruined the arc with those silly teddy bears.
So it is in business. It is very, very difficult for an incumbent terrified of losing to the upstart. The momentum is with the challenger, who is faced with nothing but opportunity. Panic sets in. And the people responsible for the original win at the incumbent have either left (been substituted? stretching my analogy) or don’t have the energy for the new battle.
Google will lose to OpenAI because it is a huge bureaucratic monster that has enshittified its main product. It deserves to die.
Google will survive, but it will become a shadow of its former self. And it deserves to wither, because it has ruined its own product.
I think customer diversity correlates instead with resilience.
> More than anything, though, I believe in the market power and defensibility of 800 million users, which is why I think ChatGPT still has a meaningful moat.
It's 800M weekly active users according to ChatGPT. I keep hearing that once you segment paid and unpaid, daily ChatGPT users fall off dramatically (<10% for paid and far less for unpaid).
Customer diversity says nothing about current or future resilience.
2. Results are noticeably worse, much more prone to "cheating" outcomes, like generating some logic and then hard-coding `= true` for all results so it always finishes regardless of conditions.