61 points by trickster_ 3 days ago | 29 comments
pseudony 3 days ago
Relevant. I would definitely be sleeping uneasily if I were at “Open”AI.

Some insist that Chinese models are a few generations behind; how many generations probably depends more on patriotism than on fact.

Those people typically also insist that Chinese models are just distillations, and they overlook how much these companies contribute to the theory of designing efficient and capable models. It is somehow assumed that they will always trail US models.

Well, I would say: look at recent history. China worked its way up the manufacturing ladder from simple, shoddy goods to highly complex products - exactly what Westerners claimed it would never be able to do. Once that was conquered, Westerners comforted themselves by insisting that China could copy, but that trail-blazing would always remain our thing. Yet Baidu and Alibaba handle scaling challenges few Western companies face, and BYD seems to match Tesla or VW just fine.

I am unsure why anyone would think US models are destined to remain in the lead forever.

At “best”, I see a fragmented world where each major region (yes, also Europe) will eventually have its own models - exactly because no one wants to give any competing power a chokehold over their society. But beyond that, models will largely be so good that this idea of “generational”/universal superiority becomes completely obsolete.

Yizahi 2 days ago
Thing is, China has the same problems as OAI. Just look at these two startups: they are among the first LLM companies for which we have some actual accounting numbers, not BS from the marketing department or Sam's xitter. The situation looks dire.

https://imgshare.cc/wzw6jzm5

maxglute 2 days ago
> China has the same problems as OAI

PRC pure-play, AI-only companies have the same problems as OpenAI, but that's not the same as huge tech companies like Baidu, Alibaba, or Tencent (i.e. Google/Microsoft tier), which can afford to lose money on AI. And ultimately they are also not sinking hundreds of billions into capex - they couldn't even if they tried, due to sanctions. Their financial exposure is an order of magnitude less: it matters whether you're losing $500M a year or $5B a year, especially as a systemic economic contagion risk - the PRC and US bubble sizes as a share of their economies are not the same.

glimshe 2 days ago
A few months ago we were hearing that it was game over because of DeepSeek. Today it has a mindshare close to zero in the developed world. Being 90% as good (which DeepSeek isn't) doesn't cut it...

US models might not be "destined" to stay in the lead, but at the moment I see no reason to believe they won't.

c-fe 3 days ago
As a retail investor mostly invested in broad ETFs (All World), is there any way I can get short exposure to OpenAI? By being short Oracle/Nvidia/Microsoft?
Yizahi 2 days ago
Shorting OAI, or really any big company, is like trying to stop a train that is on fire by standing in front of it. Yes, it is on fire and won't last long, but it will still crush any small player trying to overpower the whole corrupt system.
fauigerzigerk 2 days ago
You don't need to stand in front of a train to bet on a trainwreck.
piva00 2 days ago
But that isn't the analogy, is it?

Betting on the trainwreck is quite easy - you have nothing to lose in that analogy - while shorting companies will cost you something, often a lot if the bet has the wrong timing.

fauigerzigerk 2 days ago
Betting usually has a cost.
piva00 2 days ago
A fixed one, not one that can snowball if your bet is wrong.
fauigerzigerk 2 days ago
Not necessarily. Spread betting, for instance, doesn't work like that. And shorting a stock can be structured in a way that caps your losses as well. It's just a matter of cost vs. potential gains.
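As a toy illustration (made-up numbers; ignoring fees, margin, and borrow costs), here is a sketch of how a long put caps the downside while a naked short does not:

    # Toy payoff comparison with made-up numbers (ignores fees, margin,
    # borrow costs). A long put caps losses at the premium paid; a naked
    # short's losses grow without bound as the price rises.
    STRIKE, PREMIUM, ENTRY = 200.0, 12.0, 200.0

    def long_put_pnl(price_at_expiry: float) -> float:
        """P&L of buying one put: payoff minus the premium paid."""
        return max(STRIKE - price_at_expiry, 0.0) - PREMIUM

    def naked_short_pnl(price_at_expiry: float) -> float:
        """P&L of shorting one share at ENTRY."""
        return ENTRY - price_at_expiry

    for price in (100, 200, 400, 800):
        print(f"price {price:>4}: put {long_put_pnl(price):>7.1f}, "
              f"short {naked_short_pnl(price):>7.1f}")
    # At 800, the put has lost only the 12.0 premium;
    # the naked short is down 600.0 with no floor.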
c-fe 2 days ago
I'm not sure I like that market in particular, but Polymarket is probably the best one… assuming the market will resolve fairly.
fauigerzigerk 2 days ago
I've never used Polymarket; I just wanted to mention prediction markets as an option in general.

The particular bet I linked to is probably a bad idea though, because there is a causal link between OpenAI doing well and deciding to go public. So this is not the way to bet on it crashing and burning.

trickster_ 3 days ago
That's an excellent question. My fear is that it's going to be a little bit like putting a towel on a sun lounger on the Titanic...
c-fe 3 days ago
Exactly. I would prefer to remain invested, as I don't want to time the market. But I would like to meaningfully reduce my exposure to OpenAI and to the consequences of its possible downfall.
helsinkiandrew 3 days ago
Not really - none of those give you much exposure:

If OpenAI is worth $5B, OpenAI is about 4% of MSFT's market cap.

The ARK Venture Fund's (ARKVX) OpenAI holding is 7.2% of its total, but the fund also holds xAI, Anthropic, and lots of other AI companies.

https://www.ark-funds.com/funds/arkvx#hold

OpenAI going bust might be a shock to the share prices of publicly traded companies like Oracle, CoreWeave, SoftBank, and the like.

EDIT: obviously, that's if OpenAI is worth $500B, not $5B.
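To put rough numbers on it (a back-of-the-envelope sketch; the ~27% Microsoft stake and ~$3.5T market cap are approximate figures from public reporting, not from this thread):

    # Back-of-the-envelope OpenAI exposure via MSFT. All inputs are
    # assumed/approximate: ~27% stake, ~$3.5T MSFT market cap,
    # ~$500B OpenAI valuation.
    openai_valuation = 500e9   # assumed
    msft_stake = 0.27          # assumed
    msft_market_cap = 3.5e12   # assumed

    stake_value = openai_valuation * msft_stake
    print(f"implied stake value: ${stake_value / 1e9:.0f}B")          # ~$135B
    print(f"share of MSFT cap: {stake_value / msft_market_cap:.1%}")  # ~3.9%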

trickster_ 3 days ago
Tracking the demise of OpenAI through the news cycle
NitpickLawyer 3 days ago
Keep in mind that the "news cycle" isn't of much use in this field. In 2025, almost all "mainstream" media was dead wrong in its takes. Remember the DeepSeek R1 craze in Feb '25? Where NVDA was dead, OAI was dead, and so on? Yeah... that went well. Remember all the "no more data" craze, despite no actual researcher worth their salt saying it or even hinting at it? Remember the "hitting walls" rhetoric?

The media has been "social media'd": everything is driven by algorithms, everything is about capturing attention at the cost of everything else. Negativity sells. FUD sells.

viraptor 3 days ago
Some of those weren't really wrong.

> Remember all the "no more data" craze? Despite no actual researcher worth their salt saying it or even hinting at it?

We ran out of fresh, interesting data. A large chunk of training now has to generate its own. Synthetic-data training became a huge thing over the last year.

> Remember the "hitting walls" rhetoric?

Since then, basic training has slowed down a lot, and improvements have come more from agentic and thinking solutions, with much more reinforcement training than in the past.

The fact that we worked around those problems doesn't mean they weren't real. It's like people saying Y2K wasn't a problem... ignoring all the work that went into preventing issues.

NitpickLawyer 3 days ago
> We ran out of fresh interesting data.

No, we didn't. Hassabis has been saying this for a while now, and Gemini 3 is proof of that. The data is there; there are still plenty of untapped resources.

> Synthetic data training became a huge thing over the last year.

No, people "heard" about it over the last year. Synthetic-data training has been a thing in model training for ~2 years already. Llama 3 was post-trained on synthetic-only data and was released in April '24. In research it appeared even earlier, with the Phi family of models. Again, if you're only reading the mainstream media, you won't get an accurate picture of these things - the picture you'd get from actually working in this field, or even from following good sources, reading the key papers, and so on.

> The fact we worked around those problems doesn't mean they weren't real.

The way the media (and some influencers in this space) have framed it over the last year is not accurate. I get that people don't trust CEOs (and for good reasons), but even Amodei was saying in early 2025 interviews that there is no data problem.

9cb14c1ec0 3 days ago
No, they are not dead. However, they face incredible competition in a brutally commoditized product space.
keyle 3 days ago
AFAIK in some spaces they still have the best models on offer.
A_D_E_P_T 3 days ago
The way I see it, that was the case until a few months ago. Today, Opus 4.5 is just as good as or better than 5.2 Pro at tackling hard questions and coding, Gemini beats the free models, and Kimi K2/K2.5 is the better writer/editor.
nsingh2 2 days ago
In my own testing these models still have a different flavor to them:

- Opus 4.5 for software development. It works faster and tends to write cleaner code.

- GPT 5.2 xHigh for mathematical analysis, and analysis in general (e.g. code review, planning, double-checks); it's very meticulous.

- Gemini 3.0 Pro for image understanding, though this one I haven't played around with much.

cromka 3 days ago
Not in my experience; Gemini proves much better for me now.
embedding-shape 3 days ago
Can you get Gemini to stop outputting code comments yet? Every single time I've tried, I've been unable to get it to stop adding comments everywhere, even when explicitly prompting against it. It seems almost hardcoded into the model that code comments have to accompany any code it writes.
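One workaround, rather than fighting the model, is to strip the comments in post-processing. A minimal sketch for Python output using only the standard library (an assumption about your pipeline, not a Gemini feature):

    # Post-processing workaround: drop "#" comments from Python source
    # that a model returns, using only the standard library. It may leave
    # trailing whitespace where a comment used to be.
    import io
    import tokenize

    def strip_comments(source: str) -> str:
        """Remove COMMENT tokens and rebuild the source text."""
        tokens = tokenize.generate_tokens(io.StringIO(source).readline)
        kept = [tok for tok in tokens if tok.type != tokenize.COMMENT]
        return tokenize.untokenize(kept)

    print(strip_comments("x = 1  # set x\nprint(x)  # show it\n"))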
tsoukase 2 days ago
It will be the first application of the 'curse of the Open company' rule: any for-profit entity with "Open" in its name is destined to go bankrupt.
maxglute 2 days ago
Is OpenAI profitable yet?

Will it be in time to recoup the capex?