seizethecheese 9 hours ago
> There’s an extremely hurtful narrative going around that my product, a revolutionary new technology that exists to scam the elderly and make you distrust anything you see online, is harmful to society

The article is certainly interesting as yet another indicator of the backlash against AI, but I must say, “exists to scam the elderly” is totally absurd. I get that this is satire, but satire has to have some basis in truth.

I say this as someone whose father was scammed out of a lot of money, so I’m certainly not numb to potential consequences there. The scams were enabled by the internet, does the internet exist for this purpose? Of course not.

muvlon 8 hours ago
The article names a lot of other things that AI is being used for besides scamming the elderly, such as making us distrust everything we see online, generating sexually explicit pictures of women without their consent, stealing all kinds of copyrighted material, driving autonomous killer drones and more generally sucking the joy out of everything.

And I think I'm inclined to agree. There are a small number of things that have gotten better because of AI (certain kinds of accessibility tech) and a huge pile of things that just suck now. The internet, by comparison, feels like a clear net positive to me, even with all the bad it enables.

pixl97 7 hours ago
Here's the thing with AI: especially as it becomes more AGI-like, it will encompass all human behaviors. This will lead to the bad behaviors becoming especially noticeable, since bad actors quickly realized this is a force multiplication factor for them.

This is something everyone needs to think about when discussing AI safety. Even ANI applications carry a lot of potential societal risks and they may not be immediately evident. I know with the information superhighway few expected it to turn into a dopamine drip feed for advertising dollars, yet here we are.

ethbr1 5 hours ago
> bad actors quickly realized this is a force multiplication factor for them

You'd think we would have learned this lesson when we failed to implement email charges that netted to $0 for balanced send/receive patterns, and thereby ushered in a couple decades of spam, only eventually solved by centralization (Google).

Driving the cost of anything valuable to zero inevitably produces an infinite torrent of volume.

n8cpdx 7 hours ago
You’re way off base. It can also create sexually explicit pictures of men.
abustamam 3 hours ago
Not sure if you're being sarcastic, but women are disproportionately affected by this compared to men.
tim333 29 minutes ago
The original interview that the article is spoofing has interviewers asking Huang about the narrative that:

>It's the jobs and employment. Nobody's going to be able to work again. It's God AI is going to solve every problem. It's we shouldn't have open source for XYZ... https://youtu.be/k-xtmISBCNE?t=1436

and he says an "end of the world science fiction narrative" is hurtful.

taurath 8 hours ago
> I get that this is satire, but satire has to have some basis in truth.

Do you think that it isn't used for this? The satire part is to expand that use case to say it exists purely for that purpose.

ajkjk 8 hours ago
if you make a thing and the thing is going to be inevitably used for a purpose and you could do something about that use and you do not --- then yes, it exists for that purpose, and you are responsible for it being used in that way. you don't get to say "ah well who could have seen this inevitable thing happening? it's a shame nobody could do anything about it" when it was you that could have done something about it.
jychang 8 hours ago
Yeah. Example: stripper poles. Or hitachi magic wands.

Those poles WERE NOT invented for strippers/pole dancers. Ditto for the hitachis. Even now, I'm pretty sure more firemen use the poles than strippers. But that doesn't stop the association from forming. That doesn't make me not feel a certain way if I see a stripper pole or a hitachi magic wand in your living room.

pluralmonad 8 hours ago
I'm super confused what harms come from stripper poles and vibrators. I am prepared to accept that the joke might have gone right over my head.
ajkjk 3 hours ago
I don't get the jump either but it was certainly lateral enough to be amusing
blibble 7 hours ago
how many front rooms have you walked into that had a stripper pole?

(also: what city? for a friend...)

wizardforhire 8 hours ago
To be fair to the magic wands, that's why "massagers" were invented in the first place. [1] [2] [3]

[1] https://thefactbase.com/the-vibrator-was-invented-in-1869-to...

[2] https://archive.nytimes.com/www.nytimes.com/books/first/m/ma...

[3] https://en.wikipedia.org/wiki/Female_hysteria

irishcoffee 3 hours ago
And I'll go out on a limb and say the first person to use a pole resembling a fire pole in the fireman vs stripper debate was probably the stripper!
anonymars 8 hours ago
> you...could have done something about it

What is it that isn't being done here, and who isn't doing it?

ajkjk 3 hours ago
In this case we're debating whether one of the purposes of AI is to scam the elderly. Probably 'purpose' is not quite the right word, but the point would be: it is not the purpose of AI to not scam the elderly (or it would explicitly prevent that).

(note: I do not actually know if it explicitly prevents that. But because I am very cynical about corporations, I'd tend to assume it doesn't.)

rgmerk 7 hours ago
My hypothesis: Generative AI is, in part, reaping the reaction that cryptocurrency sowed.
drzaiusx11 8 hours ago
Training a model on sound data from readily available public social network posts and targeting their followers (which on, say, FB would include family and is full of "olds") isn't a very far-fetched use case for AI. I've created audio models used as audiobook narrators where you can trivially make a "frantic/panicked" voice clip saying "help it's [grandson], I'm in jail and need bail. Send money to [scammer]"

If it's not happening yet, it will...

evandrofisico 7 hours ago
It is happening already. Recently a Brazilian woman living in Italy was scammed into thinking she was in an online relationship with a Brazilian TikToker; the scammers created a fake profile and were sending her audio messages with the voice of said TikToker cloned via AI. She sent the scammers a lot of money for the wedding, but when she arrived in Brazil she discovered the con.
bandrami 7 hours ago
It's already happening in India. Voicefakes are working unnervingly well and it's amplified by the fact that old people who had very little exposure to tech have basically been handed a smart phone that has control of their pension fund money in an app.
solid_fuel 8 hours ago
LLMs are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop.

After you eliminate anything that requires accountability and trustworthiness from the tasks which LLMs may be responsibly used for, the most obvious remaining use-cases are those built around lying:

- advertising

- astroturfing

- other forms of botting

- scamming old people out of their money

simianwords 34 minutes ago
Extremely exaggerated comment. LLMs don't hallucinate that much. That doesn't rule them out of any control loop.

I mean, I think you have not put much thought into your theory.

echelon 8 hours ago
It's easily doubled my productivity as an engineer.

As a filmmaker, my friends and I are getting more and more done as well:

https://www.youtube.com/watch?v=tAAiiKteM-U

https://www.youtube.com/watch?v=oqoCWdOwr2U

As long as humans are driving, I see AI as an exoskeleton for productivity:

https://github.com/storytold/artcraft (this is what I'm making)

It's been tremendously useful for me, and I've never been so excited about the future. The 2010s and 2020s of cellphone incrementalism and social media platformization of the web were depressing. These models and techniques are actually amazing, and you can apply them to so many problems.

I genuinely want robots. I want my internet to be filtered by an agent that works for me. I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.

Apart from all the other madness in the world, this is the one thing that has been a dream come true.

As long as these systems aren't owned by massive monopolies, we can disrupt the large companies of the world and make our own place. No more nepotism in Hollywood, no more working as a cog in the labyrinth of some SaaS company - you can make your own way.

There's financial capital and there's labor capital. AI is a force multiplier for labor capital.

navigate8310 8 hours ago
> I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.

While I certainly respect your interactivity and the consequent force-multiplier nature of AI, this doesn't mean you should try to emulate an already existing piece of work. You'll certainly get a small dopamine hit when you successfully copy something, but it will also atrophy your critical skills and keep you from making any sort of original art. You'll miss out on discovering the feeling of frontier work that you can truly call your own.

blks 8 hours ago
So instead of actually making films, the thing you as a filmmaker supposedly like to do, you have some chatbot do it for you? Or what part of that is generated by a chatbot?

Claims of productivity boosts must always be inspected very carefully, as they are often merely perceived, and the reality may be the opposite (e.g. spending more time wrestling the tools), or creating unmaintainable debt, or making someone else spend extra time to review the PR and make 50 comments.

echelon 8 hours ago
> So instead of actually making films, the thing you as a filmmaker supposedly like to do, you have some chatbot do it for you? Or what part of that is generated by a chatbot?

There's no chatbot. You can use image-to-image, ControlNets, LoRAs, IPAdapters, inpainting, outpainting, workflows, and a lot of other techniques and tools to mold images as if they were clay.

I use a lot of 3D blocking with autoregressive editing models to essentially control for scene composition, pose, blocking, camera focal length, etc.

Here's a really old example of what that looks like (the models are a lot better at this now):

https://www.youtube.com/watch?v=QYVgNNJP6Vc

There are lots of incredibly talented folks using Blender, Unreal Engine, Comfy, Touch Designer, and other tools to interface with models and play them like an orchestra - direct them like a film auteur.
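
If anyone's curious what the most basic building block of that looks like in code, here's a minimal image-to-image sketch using the diffusers library (the model name, file names, and settings are just illustrative placeholders, not my actual pipeline; real workflows chain many of these passes together with ControlNets, LoRAs, inpainting, and so on):

    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from PIL import Image

    # Load a stock img2img pipeline (any Stable Diffusion-style checkpoint works)
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Start from a rough 3D blocking render to lock composition, pose, and camera
    init = Image.open("blocking_render.png").convert("RGB")

    frame = pipe(
        prompt="cinematic night exterior, 35mm, rain-slick street",
        image=init,
        strength=0.55,       # how far the model may drift from the input image
        guidance_scale=7.0,  # how strongly it follows the text prompt
    ).images[0]
    frame.save("shot_v1.png")

The input image does most of the directing; the model is just rendering on top of it, which is why the 3D blocking step matters so much.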

heliumtera 5 hours ago
there are probably more tools for achieving this level of productivity than there are real humans interested in consuming this slop
jacquesm 7 hours ago
As a rule real creativity blossoms under constraints, not under abundance.
echelon 6 hours ago
Trying to make a dent in the universe while we metabolize and oxidize our telomeres away is a constraint.

But to be more in the spirit of your comment, if you've used these systems at all, you know how many constraints you bump into on an almost minute to minute basis. These are not magical systems and they have plenty of flaws.

Real creativity is connecting these weird, novel things together into something nobody's ever seen before. Working in new ways that are unproven and completely novel.

gllmariuty 8 hours ago
> AI is a force multiplier for labor capital

for a 2011 account that's a shockingly naive take

yes, AI is a labor capital multiplier. and the multiplicand is zero

hint: soon you'll be competing not with humans without AI, but with AIs using AIs

Terr_ 7 hours ago
Even if it's >1, it doesn't follow that it's good news for the "labor capitalist".

"OK, so I lost my job, but even adjusting for that, I can launch so many more unfinished side-projects per hour now!"

queenkjuul 7 hours ago
Genuine question: does the agent work for you if you didn't build it, train it, or host it?

It's ostensibly doing things you asked it, but in terms dictated by its owner.

blibble 7 hours ago
indeed

and it's even worse than that: you're literally training your replacement by using it when it re-transmits what you're accepting/discarding

and you're even paying them to replace you

heliumtera 5 hours ago
always good to be in the pick and shovel biz
ajross 8 hours ago
> [...] are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop.

True, but no more true than it is if you replace the antecedent with "people".

Saying that the tools make mistakes is correct. Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.

History is paved with people who got steamrollered by technology they didn't think would ever work. On a practical level AI seems very median in that sense. It's notable only because it's... kinda creepy, I guess.

solid_fuel 8 hours ago
> True, but no more true than it is if you replace the antecedent with "people".

Incorrect. People are capable of learning by observation, introspection, and reasoning. LLMs can only be trained by rote example.

Hallucinations are, in fact, an unavoidable property of the technology - something which is not true for people. [0]

[0] https://arxiv.org/abs/2401.11817

TheOtherHobbes 8 hours ago
The suggestion that hallucinations are avoidable in humans is quite a bold claim.
CamperBob2 8 hours ago
What you (and the authors) call "hallucination," other people call "imagination."

Also, you don't know very many people, including yourself, if you think that confabulation and self-deception aren't integral parts of our core psychological makeup. LLMs work so well because they inherit not just our logical thinking patterns, but our faults and fallacies.

blibble 7 hours ago
what I call it is "buggy garbage"

it's not a person, it doesn't hallucinate or have imagination

it's simply unreliable software, riddled with bugs

CamperBob2 4 hours ago
(Shrug) Perhaps other sites beckon.
fao_ 8 hours ago
> Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.

It is, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies by companies who have tried putting them in control loops! We have about 4 years of examples of bad things happening because the trigger was given to an LLM.

ajross 8 hours ago
> We have numerous studies on why hallucinations are central to the architecture,

And we have tens of thousands of years of shared experience of "People Were Wrong and Fucked Shit Up". What's your point?

Again, my point isn't that LLMs are infallible; it's that they only need to be better than their competition, and their competition sucks.

TheOtherHobbes 7 hours ago
It's a fine line. Humans don't always fuck shit up.

But human systems that don't fuck shit up are short-lived, rare, and fragile, and they've only become a potential - not a reality - in the last century or so.

The rest of history is mostly just endless horrors, with occasional tentative moments of useful insight.

ryan_lane 8 hours ago
Scammers are using AI to copy the voice of children and grandchildren, and make calls urgently asking to send money. It's also being used to scam businesses out of money in similar ways (copying the voice of the CEO or CFO, urgently asking for money to be sent).

Sure, the AI isn't directly doing the scamming, but it's supercharging the ability to do so. You're making a "guns don't kill people, people do" argument here.

seizethecheese 8 hours ago
Not at all. I’m saying AI doesn’t exist to scam elderly, which is saying nothing about whether it’s dangerous in that respect.
only-one1701 8 hours ago
Perhaps you’ve heard that the purpose of a system is what it does?
the_snooze 8 hours ago
Exactly this. These systems are supposed to have been built by some of the smartest scientific and engineering minds on the planet, yet they somehow failed (or chose not) to think about second-order effects and what steady-state outcomes their systems will have. That's engineering 101 right there.
jacquesm 8 hours ago
That's because they were thinking about their stock options instead.
rcxdude 8 hours ago
This phrase almost always seems to be invoked to attribute purpose (and more specifically, intent and blame) to something based on outcomes, when it would be better considered a way to stop thinking in terms of those things in the first place.
irjustin 8 hours ago
In broad strokes - disagree.

This is the knife-food vs knife-stab vs gun argument. Just because you can cook with a hammer doesn't make it its purpose.

solid_fuel 8 hours ago
> Just because you can cook with a hammer doesn't make it its purpose.

If you survey all the people who own a hammer and ask what they use it for, cooking is not going to make the list of top 10 activities.

If you look around at what LLMs are being used for, the largest spaces where they have been successfully deployed are astroturfing, scamming, and helping people break from reality by sycophantically echoing their users and encouraging psychosis.

pixl97 7 hours ago
I mean, this is a pretty piss-poor example.

Email, by number of messages attempted to be sent, is owned by spammers 10 to 100 fold over legitimate email. You typically don't see this because of a massive effort by any number of companies to ensure that spam dies before it shows up in your mailbox.

To go back one step further: porn was one of the first successful businesses on the internet, and that was more than enough motivation for our more conservative congress members to want to ban the internet in the first place.

paulryanrogers 6 hours ago
Email volume is mostly robots fighting robots these days.

Today, if we could survey AI contact with humans, I'm afraid the top of the list by a wide margin would be scams, cheating, deepfakes, and porn.

christianqchung 8 hours ago
Is it possible that these are in the top 10, but not the top 5? I'm pretty sure programming, email/meeting summaries, cheating on homework, random QA, and maybe roleplay/chat are the most popular uses.
jacquesm 8 hours ago
Programmers are vastly outnumbered by the people who do not program. Email / meeting summaries: maybe. Cheating on homework: maybe not your best example.
only-one1701 8 hours ago
I was going to reply to the post above but you said it perfectly.
NicuCalcea 7 hours ago
I can't think of many other reasons to create voice cloning AI, or deepfake AI (other than porn, of course).
rgmerk 7 hours ago
There are legitimate applications - fixing a tiny mistake in the dialogue in a movie in the edit suite, for instance.

Do these legitimate applications justify making these tools available to every scammer, domestic abuser, child porn consumer, and sundry other categories of criminal? Almost certainly not.

wk_end 8 hours ago
No one - neither the author of the article nor anyone reading - believes that Sam Altman sat down at his desk one fine day in 2015 and said to himself, “Boy, it sure would be nice if there were a better way to scam the elderly…”
username223 7 hours ago
And no one believes that Sam Altman thinks of much more than adding to his own wealth and power. His first idea was a failing location data-harvesting app that got bought. Others have included biometric data-harvesting with a crypto spin, and this. If there's a throughline beyond manipulative scamming, I don't see it.
burnto 8 hours ago
Fair, but it’s an exaggerated statement that’s supposed to clue us into the tone of the piece with a chuckle. Maybe even a snicker or giggle! It’s not worth dissecting for accuracy.
criley2 8 hours ago
Sure, phones aren't directly doing the scamming, but they're supercharging the ability to do so.

Phones are also a very popular mechanism for scamming businesses. It's tough to pull off CEO scams without text and calls.

Therefore, phones are bad?

This is of course before we talk about what criminals do with money, making money truly evil.

only-one1701 8 hours ago
Without phones, we couldn’t talk to people across great distances (oversimplification but you get it).

Without Generative AI, we couldn’t…?

simianwords 36 minutes ago
What's the big deal in talking to people across great distances? We can live without it.
shepherdjerred 8 hours ago
Are you really implying that generative AI doesn't enable things that were not previously possible?
Larrikin 8 hours ago
It's actually a fair question. There are software projects I wouldn't have taken on without an LLM. Not because I couldn't make them, but because of the time needed to create them.

I could have taken the time to do the math to figure out what the rewards structure is for my Wawa points and compare it to my car's fuel tank to discover I should strictly buy sandwiches and never gas.

People have been making nude celebrity photos for decades now with just Photoshop.

Some activities have gotten a speed up. But so far it was all possible before just possibly not feasible.

simianwords 36 minutes ago
What did internet bring?
jamiek88 8 hours ago
Name some then! I initially scoffed too, but I can only think of stuff LLMs make easier, not things that were impossible previously.
pixl97 7 hours ago
Isn't that the vast majority of products? By making things easier, they change the scale at which things get accomplished. Farming wasn't impossible before the tractor.

People seemingly have some very odd views on products when it comes to AI.

solid_fuel 8 hours ago
Can you name one thing generative AI enables that wasn't previously possible?
pixl97 7 hours ago
Can you name one thing a plow enables that wasn't previously possible?

This line of thinking is ridiculous.

Larrikin 2 hours ago
A plow enables you to till land you couldn't before with your bare hands.

The phone lets you talk to someone you couldn't before when shouting can't.

ChatGPT lets you...

Please complete the sentence without an analogy

simianwords 37 minutes ago
This conversation is naive and simplifies technologies into “does it achieve something you otherwise couldn’t”.

The answer is that ChatGPT allows you to do things more efficiently than before. Efficiency doesn't sound sexy, but this is what adds up to higher prosperity.

Arguments like this can be used against the internet. What does it allow you to do now that you couldn't do before?

Answer might be “oh I don’t know, it allows me to search and index information, talk to friends”.

It doesn’t sound that sexy. You can still visit a library. You can still phone your friends. But the ease of doing so adds up and creates a whole ecosystem that brings so many things.

mcv 25 minutes ago
...generate piles of low quality content for almost free.

AI is fascinating technology with undoubtedly fantastic applications in the future, but LLMs mostly seem to be doing two things: provide a small speedup for high quality work, and provide a massive speedup to low quality work.

I don't think it's comparable to the plow or the phone in its impact on society, unless that impact will be drowning us in slop.

freejazz 7 hours ago
> were not previously possible?

How obtuse. The poster is saying they don't enable anything of value.

queenkjuul 7 hours ago
For the most part, it hasn't. What do you consider previously impossible, and how is it good for the world?
JumpCrisscross 8 hours ago
> Therefore, phones are bad?

Phones are utilities. AI companies are not.

mrnaught 7 hours ago
>> enabled by the internet, does the internet exist for this purpose? Of course not.

I think the point the article was trying to make is that LLMs and new genAI tools help scammers scale their operations.

thefz 2 hours ago
> The scams were enabled by the internet, does the internet exist for this purpose? Of course not.

But did it accelerate the whole process? Hell yeah.

gosub100 8 hours ago
It doesn't exist for that express purpose, but the voice and video impersonation is definitely being used to scam elderly people.

Instead of being used to protect us or make our lives easier, it is being used by evildoers to scam the weak and vulnerable. None of the AI believers will do anything about it because it kills their vibe.

JumpCrisscross 8 hours ago
> the voice and video impersonation is definitely being used to scam elderly people

And as with child pornography, the AI companies are engaging in high-octane buck-passing more than actually trying to tamp down the problem.

awesome_dude 9 hours ago
I think that maybe the point isn't that the scams/distrust are "new" with the advent of AI, but "easier" and "more polished" than before.

The language of the reader is no longer a serious barrier/indicator of a scam. ("A real bank would never talk like that" has become "well, that's something they would say, the way that they would say it.")

techblueberry 6 hours ago
Porn was enabled by the internet, but does the internet exist for this purpose?

Yes. Yes it does. That is the satire.

wat10000 7 hours ago
They're used for scams. Isn't that the basis in truth you're looking for in satire?

Before this we had "the internet is for porn." Same sort of exaggerated statement.

ryanobjc 8 hours ago
I mean... explain sora.
internet101010 7 hours ago
Revolutionizing cat memes
popalchemist 4 hours ago
While the employees of the companies that make AI may have noble, even humanity-redeeming/saving intentions, the billionaire class absolutely has bond-villain level intentions. The destruction of the middle class and the removal of all livable-wage jobs is absolutely part of the techno-feudalist playbook that Trump, Altman, Zuckerberg, etc are intentionally moving toward. I'd say that is a scam. They want to recreate the conditions of earlier society - an upper class (them, who own the entire means of production and can operate the entire machine without the need for peons' input) who does whatever they want because the lower class is incapable of opposing them.

If you aren't familiar, look into it.

gllmariuty 9 hours ago
article forgot to mention the usual "think about the water usage"
Retric 9 hours ago
What’s the point of attacking a straw man while ignoring the actual points being brought up?

The water usage by data centers is fairly trivial in most places. The water used in manufacturing the physical infrastructure and generating the electricity is surprisingly large, but again mostly irrelevant. Yet modern 'AI' has all sorts of actual problems.

seizethecheese 9 hours ago
It mentions ecological destruction, which I must say is a way better point than water usage; AI is a power hog, after all.
rootnod3 9 hours ago
If it's the "usual reply", maybe it's because....I dunno...water is kinda important?
queenkjuul 7 hours ago
I'm also not convinced the HN refrain of "it's actually not that much water" is entirely true. I've seen conflicting reports from sources I generally trust, and it's no secret that an all-GPU AI data center is more resource-intensive than a general-purpose data center.
vitajex 5 hours ago
> satire has to have some basis in truth

In order to be funny at least!

quantum_state 8 hours ago
Viewed from a historical perspective, big tech is really reaping the benefits of the intellectual wealth accumulated over many thousands of years by humanity collectively. This should be recognized to find a better path forward.
mediaman 7 hours ago
How? They are all losing tens of billions of dollars on this, so far.

Open source models are available at highly competitive prices for anyone to use and are closing the gap to 6-8 months from frontier proprietary models.

There doesn't appear to be any moat.

This criticism seems very valid against advertising and social media, where strong network effects make dominant players ultra-wealthy and act like a tax, but the AI business looks terrible, and it appears that most benefits are going to accrue fairly broadly across the economy, not to a few tech titans.

NVIDIA is the one exception to that, since there is a big moat around their business, but it's not clear how long that will last either.

TheColorYellow 7 hours ago
I'm not so sure that's correct. The Labs seem to offer the best overall products in addition to the best models. And requirements for models are only going to get more complex and stringent going forward. So yes, open source will be able to keep up from a pure performance standpoint, but you can imagine a future state where only licensed models are able to be used in commercial settings, and licensing will require compliance aimed at limiting subversive use or similar (e.g. no sexualization of minors, no bomb-making instructions, etc.).

When the market shifts to a more compliance-relevant world, I think the Labs will have a monopoly on all of the research, ops, and production know-how required to deliver. That's not even considering if Agents truly take off (which will then place a premium on the servicing of those agents and agent environments rather than just the deployment).

There are a lot of assumptions in the above, and the timelines certainly vary, so it's far from a sure thing - but the upside definitely seems there to me.

cj 6 hours ago
If that's the case, the winner will likely be cloud providers (AWS, GCP, Azure) who do compliance and enterprise very well.

If Open Source can keep up from a pure performance standpoint, any one of these cloud providers should be able to provide it as a managed service and make money that way.

Then OpenAI, Anthropic, etc end up becoming product companies. The winner is who has the most addictive AI product, not who has the most advanced model.

tru3_power 6 hours ago
What's the purpose of licensing requiring those things, though, if someone could just use an open source model to do them anyway? If someone were going to do the things you mentioned, why do it through some commercial enterprise tool? I can see licensing maybe requiring a certain level of hardening to prevent prompt injections, but ultimately it still really comes down to how much power you give the model in whatever context it's operating in.
gizmodo59 7 hours ago
Nvidia is not the only exception. The private big names are losing money, but there are so many public companies having the time of their lives. Power, materials, DRAM, and storage, to name a few. The demand is truly high.

What we can argue about is whether AI is truly transforming everyone's lives; the answer is no. There is a massive exaggeration of benefits. The value is not ZERO. It's not 100. It's somewhere in between.

CrossVR 6 hours ago
I believe that eventually the AI bubble will evolve into a simple scheme to corner the compute market. If no one can afford high-end hardware anymore, then the companies who hoarded all the DRAM and GPUs can simply go rent-seeking by selling the compute back to us at exorbitant prices.
mikestorrent 6 hours ago
The demand for memory is going to result in more factories and production. As long as demand is high, there's still money to be made in going wide to the consumer market with thinner margins.

What I predict is that we won't advance in memory technology on the consumer side as quickly. For instance, a huge number of basic consumer use cases would be totally fine on DDR3 for the next decade. Older equipment can produce it, so it has value, and we may see platforms come out with newer designs on older fabs.

Chiplets are a huge sign of growth in that direction - you end up with multiple components fabbed on different processes coming together inside one processor. That lets older equipment still have a long life and gives the final SoC assembler the ability to select from a wide range of components.

https://www.openchipletatlas.org/

digiown 6 hours ago
That makes no sense. If the bubble bursts, there will be a huge oversupply and the prices will fall. Unless Micron, Samsung, Nvidia, AMD, etc. all go bankrupt overnight, the prices won't go up when demand vanishes.
charcircuit 5 hours ago
There is a massive undersupply of compute right now for the current level of AI. The bubble bursting doesn't fix that.
charcircuit 6 hours ago
>losing tens of billions

They are investing tens of billions.

bigstrat2003 5 hours ago
They are wasting tens of billions on something that has no business value currently, and may well never, just because of FOMO. That's not what I would call an investment.
charcircuit 4 hours ago
Many investments may lose money, but the EV here is positive due to the extreme utility that AI can bring and is bringing.
bandrami 6 hours ago
They are washing tens of billions of dollars in an industry-wide attempt to keep the music playing
gruez 7 hours ago
>Open source models are available at highly competitive prices for anyone to use and are closing the gap to 6-8 months from frontier proprietary models.

What happens when the AI bubble is over and the developers of open models don't want to incinerate money anymore? Foundation models aren't like curl or openssl. You can't maintain them with a few engineers' free time.

compounding_it 6 hours ago
Training is really cheap compared to the basically free inference being handed out by OpenAI, Anthropic, Google, etc.

Spending a million dollars on training and giving the model away for free is far cheaper than spending hundreds of millions of dollars on inference every month and charging a few hundred thousand for it.

mikestorrent 5 hours ago
Not sure I totally follow. I'd love to better understand why companies are open sourcing models at all.
edoceo 6 hours ago
If the bubble is over, all the built infrastructure would become cheaper to train on? So those open models would incinerate less? Maybe there is an increase in specialist models?

Like after dot-com the leftovers were cheap - for a time - and became valuable (again) later.

bandrami 6 hours ago
No, if the bubble ends the use of all that built infrastructure stops being subsidized by an industry-wide wampum system where money gets "invested" and "spent" by the same two parties.
edoceo 2 hours ago
I feel like that was happening for the fiber-backhaul in 1999. Just different players.
yowlingcat 7 hours ago
I agree with your point, and it is on that point that I disagree with GP. These open-weight models, which have ultimately been constructed from so many thousands of years of humanity's work, are also now freely available to all of humanity. To me that is the real marvel and a true gift.
fHr 6 hours ago
The other side of the market:
ulfw 6 hours ago
It's turning out to be a commodity product. Commodity products are a race to the bottom on price. That's how this AI bubble will burst. The investments can't possibly show the ROIs envisioned.

As an LLM user I use whatever is free/cheapest. Why pay for ChatGPT if Copilot comes with my office subscription? It does the same thing. If not, I use Deepseek or Qwen and get very similar results.

Yes, if you're a developer on Claude Code et al., I get the point. But that's few people. The mass market is just using chat LLMs, and those are nothing but a commodity. It's like jumping from Siri to Alexa to whatever the Google thing is called. There are differences, but they're too small to be meaningful for the average user

simianwords 33 minutes ago
Why do you see it as zero sum? I don’t care if big tech is accumulating intellectual wealth. I’m getting good products.
derektank 7 hours ago
Isn’t the reason we have a public domain so that people aren’t in a perpetual debt to their intellectual forebears?
gruez 7 hours ago
Copyrights last a very long time. Moreover, nothing says it has to be open. The recipe for Coke is still secret.
bandrami 6 hours ago
The recipe for Coca-Cola is not copyrighted (recipes in general can't be) but is protected by trade secret law, which can notionally last forever.

The recipe also isn't that much of a secret, they read it on the air on a This American Life episode and the Coca Cola spokesperson kind of shrugged it off because you'd have to clone an entire industrial process to turn that recipe into a recognizable Coke.

daveguy 6 hours ago
The recipe for Coke is not a copyright, it is a trade secret. Trade secrets can remain indefinitely if you can keep them secret. Copyrights are "open" by their nature.
gruez 6 hours ago
In the context of this discussion though, what makes you think OpenAI can't keep theirs a trade secret?
daveguy 6 hours ago
I was agreeing it could last a very long time, even longer than copyright - but specifically because it is not copyright. As an AI model, though, it just won't have value for very long. Models are dated within 6 months and obsolete in 2 years. IP around development may last longer.
justarandomname 8 hours ago
yeah, but zero chance of that happening unfortunately.
pear01 7 hours ago
well practiced cynicism is boring.

imo there are actually too few answers for what a better path would even look like.

hard to move forward when you don't know where you want to go. answers in the negative are insufficient, as are those that offer little more than nostalgia.

smallmancontrov 7 hours ago
It's interesting that the prosperity maximum of both the United States and China happened at "market economy kept in line with a firm hand" even though we approached it from different directions (left and right respectively) and in the US case reversed course.

We could use another Roosevelt.

stemlord 7 hours ago
people have been pretty clear about a positive path forward

- big tech should pay for the data they extract and sell back to us

- startups should stop forcing ai features that no one wants down our throats

- the vanguard of ai should be open and accessible to all not locked in the cloud behind paywalls

FridayoLeary 7 hours ago
But OP's comment is frankly absurd. It sounds reasonable for about 1 second before you think about it. What sets tech apart from every other area of human innovation? And why limit it to that? What about mineral exploitation? Oil, etc.?

It's just not a well-thought-out comment. If we focus on the "better path forward", the entrance to which is only unlocked by the realisation that big tech's achievements (and thus, profits) belong to humanity collectively... after we reach this enlightened state, what does OP believe are the first couple of things a traveller on this path is likely to encounter (beyond Big Tech's money, which incidentally we take loads of already in the form of taxes, just maybe not enough)?

_DeadFred_ 6 hours ago
Tech is the most set apart area of innovation ever.

First you have tech's ability to scale. The ability to scale also lets it creep new changes/behaviors into every aspect of our lives faster than any 'engine for change' could previously.

Tech also inherits: you can treat it like Legos, using what is by now definitely tens, maybe hundreds of thousands of human-years of work as building blocks to build on top of. Imagine if you started every house with a hundred thousand human-years of labor already completed, instantly. No other domain in human history accumulates tens of millions of skilled human-years annually and allows so much of that work to stack, copy, and propagate at relatively low cost.

And tech's speed of iteration is insane. You can try something, measure it, change it, and redeploy in hours. Unprecedented experimentation on a mass scale leading to quicker evolution.

It's so disingenuous to have tech valuations as high as they are based on these differentiators, but at the same time say 'tech is just like everything from the past and must not be treated differently, and it must be assumed outcomes from it are just like historical outcomes'. No, it is a completely different beast, and the differences are becoming more pronounced as the above 10Xs over and over.

greesil 7 hours ago
Well practiced criticism of cynicism is boring
relaxing 7 hours ago
What should?
mrwaffle 7 hours ago
Is this technically a form of retroactive mind rape? If so, at least we have the right oligarchic friends experienced in this running the big show. (Apologies if I just broke any rules here).
mrwaffle 6 hours ago
This seems to be a touchy subject for YC people with 500+ karma. Not a repudiation but an 'invisible hand' downvote to avoid a response or exposure of an opinion. My ancestors fought in the revolutionary war and like them, I'll die on this very subtle rolling hill of a question. I loved you all as brothers, this may be the end for mrwaffle.
FridayoLeary 7 hours ago
Sounds like you just want some of their money.
triceratops 7 hours ago
Yes, especially since they're talking about wiping out most or all white-collar jobs in our lifetimes. What's wrong with that?
FridayoLeary 7 hours ago
Why drag your dead ancestors into the debate?

On that note they say oil is dead dinosaurs, maybe have a word with Saudi Arabia...

dekhn 7 hours ago
Oil comes from algae (and other tiny marine organisms) not dinosaurs.
triceratops 7 hours ago
Was this reply intended for a different comment? Or do I need more sleep?
blactuary 7 hours ago
If they want to abandon noblesse oblige we can certainly go back to the old way of evening things out. Their choice
mackeye 6 hours ago
some would say their money is our money via the ltv :-)
Gene5ive 7 hours ago
Up Next: A McSweeney's article where McSweeney's takes the debates about it on Hacker News as seriously as Hacker News takes McSweeney's: way too much
selimthegrim 6 hours ago
This has the potential to be another /g/ ITT we HN now
jaybyrd 9 hours ago
guys we're just trying to take jobs away from you.... please stop being mean to us - richest people on earth 2026
donkey_brains 7 hours ago
Today a manager at my work asked all his teams including mine “please write up a report on how many engineers from your teams we could replace with AI”.

Surprisingly, the answer he got was “none, because that’s not how AI works”.

Guess we’ll see if that registers…

MobiusHorizons 7 hours ago
I would love to have responded with something like "only one: yours"

But in all seriousness, AI does a pretty good job of impersonating VPs. It's confidently wrong and full of hope for the future.

consumer451 6 hours ago
I use various agentic dev tools all day long, mostly with Opus. The tools are very capable now, but when planning mid-complexity features, I find the time estimates hilarious.

Phase 1: 1-2 weeks

Phase 2: 1 week

Phase 3: 2 weeks

8 to 12 hours later, all the work is done and tested.

The funny part to me was that if I had an AI true believer boss, I would report those time estimates directly, and have a lot of time to do other stuff.

ziml77 5 hours ago
Human time estimates are bad, but the ones that AI gives are just absurd. I've seen them used for everything from small things like planning interviews and short presentations, all the way up to large-scale projects. In no case do they make any sense to me. But I think people end up trusting them because they look so confident and well planned due to how the AIs break things down.
whattheheckheck 6 hours ago
When you're the boss telling kids how to work, what time estimates will you believe?

'Tis the cycle

sublinear 7 hours ago
All of them because cost cutting is a red flag in business regardless of what year it is.
GolfPopper 9 hours ago
You forgot... "by stealing from artists and writers at scale".
jacquesm 9 hours ago
You forgot about 'open source contributors' and 'musicians'.
dylan604 7 hours ago
these two groups are used to having their stuff stolen way more than the groups GP listed, so in a way it's kind of appropriate that they were omitted.
soulofmischief 8 hours ago
As an open source contributor and musician who is not rich, I am pretty stoked about the engineering, scientific and mathematical advancements being made in my lifetime.

I have only become more creatively enabled when adopting these tools, and while I share the existential dread of becoming unemployable, I also am wearing machine-fabricated clothing and enjoying a host of other products of automation.

I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.

overgard 7 hours ago
Well, if you consider Maslow's hierarchy of needs, "creatively enabled" would be a luxury at the top of the pyramid with "self actualization". Luxuries don't matter if the things at the bottom of the pyramid aren't there -- i.e. you can't eat or put a shelter over your head. I think the big AI players really need a coherent plan for this if they don't want a lot of mainstream and eventually legislative pushback. Not to mention it's bad business if nobody can afford to use AI because they're unemployed. (I'm not anti-AI, it's an interesting tool, but I think the way it's being developed is inviting a lot of danger for very marginal returns so far)
jacquesm 7 hours ago
> I think the big AI players really need a coherent plan for this if they don't want a lot of mainstream and eventually legislative pushback.

That's by far not the worst that could happen. There could very well be an axe attached to the pendulum when it swings back.

> Not to mention it's bad business if nobody can afford to use AI because they're unemployed.

In that sense this is the opposite of the Ford story: the value of your contribution to the process will approach zero so that you won't be able to afford the product of your work.

soulofmischief 3 hours ago
We were going to have to reckon with these problems eventually as science and technology inevitably progressed. The problem is the world is plunged in chaos at the moment and being faced with a technology that has the potential to completely and rapidly transform society really isn't helping.

Hatred of the technology itself is misplaced, and it is difficult sometimes debating these topics because anti-AI folk conflate many issues at once and expect you to have answers for all of them as if everyone working in the field is on the same agenda. We can defend and highlight the positives of the technology without condoning the negatives.

jacquesm 2 hours ago
> Hatred of the technology itself is misplaced

I think hatred is the wrong word. Concern is probably a better one and there are many things that are technology and that it is perfectly ok to be concerned about. If you're not somewhat concerned about AI then probably you have not yet thought about the possible futures that can stem from this particular invention and not all of those are good. See also: Atomic bombs, the machine gun, and the invention of gunpowder, each of which I'm sure may have some kind of contrived positive angle but whose net contribution to the world we live in was not necessarily a positive one. And I can see quite a few ways in which AI could very well be worse than all of those combined (as well as some ways in which it could be better, but for that to be the case humanity would first have to grow up a lot).

soulofmischief 16 minutes ago
I'm extremely concerned about the implications. We are going to have to restructure a lot of things about society and the software we use.

And like anything else, it will be a tool in the elite's toolbox of oppression. But it will also be a tool in the hands of the people - unless anti-AI sentiment gets compromised and redirected into support for limiting access to capable generative models to the State and research facilities.

The hate I am referring to is often more ideological, about the usage of these models from a purity standpoint. That only bad engineers use them, or that their utility is completely overblown, etc. etc.

soulofmischief 3 hours ago
You can be poor and creative at the same time. Creativity is not a luxury. For many, including myself, it's a means of survival. Creating gives me purpose and connection to the world around me.

I grew up very poor and was homeless as a teenager and in my early 20s. I still studied and practiced engineering and machine learning then, I still made art, and I do it now. The fact that Big Tech is the new Big Oil is beside the point. Plenty of companies are using open training sets and producing open, permissively licensed models.

johnnyanmac 8 hours ago
> I also am wearing machine-fabricated clothing and enjoying a host of other products of automation.

I'm not really a fan of the "you criticize society yet you participate in it" argument.

>I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.

You seem to forget the blood shed over the history that allowed that tech to benefit the people over just the robber barons. Unimaginable numbers of people died just so we could get a 5-day workweek and minimum wage.

We don't get a beneficial future by just laying down and letting the people with the most perverse incentives decide the terms. The very least you can do is not impede those trying to fight for those futures if you can't/don't want to fight yourself.

Wyverald 7 hours ago
>> I also am wearing machine-fabricated clothing and enjoying a host of other products of automation.

> I'm not really a fan of the "you criticize society yet you participate in it" argument.

It seems to me that GP is merely recognizing the parts of technological advance that they do find enjoyable. That's rather far from the "I am very intelligent" comic you're referencing.

> The very least you can do is not impede those trying to fight for those futures if you can't/don't want to fight yourself.

Just noting that GP simply voiced their opinion, which IMHO does not constitute "impedance" of those trying to fight for those futures.

johnnyanmac 7 hours ago
>GP is merely recognizing the parts of technological advance that they do find enjoyable.

Machine fabrication is nice. Machine fabrication from sweatshop children in another country is not enjoyable. That's the exact nuance missing from their comment.

>GP simply voiced their opinion, which IMHO does not constitute "impedance" of those trying to fight for those futures.

I'd hope we'd understand since 2024 that we're in an attention society, and this is a very common tactic used to disenfranchise people from engaging in action against what they find unfair. Enforcing a feeling of inevitability is but one of many methods.

Intentionally or not, language like this does impede the efforts.

soulofmischief 3 hours ago
> I'm not really a fan of the "you criticize society yet you participate in it" argument.

Me neither, and I didn't make such an argument.

> You seem to forget the blood shed over the history that allowed that tech to benefit the people over just the robber barons. Unimaginable numbers of people died just so we could get a 5-day workweek and minimum wage.

What does that have to do with my argument? What about my argument suggested ignorance of this fact? This is just another straw man.

> We don't get a beneficial future by just laying down and letting the people with the most perverse incentives decide the terms. The very least you can do is not impede those trying to fight for those futures if you can't/don't want to fight yourself.

What an incredible characterization. Nothing about my argument is "laying down", perhaps it seems that way because you do not share my ideals, but I fight for my ideals, I debate them in public as I do now, and that is the furthest thing from "laying down" and "not fighting myself". You seem to be projecting several assumptions about my politics and historical knowledge. Did you have a point to make or was this just a bunch of wanking?

nozzlegear 8 hours ago
As an open source maintainer, I'm not stoked and I feel pretty much the opposite way. I've only become more annoyed when trying to adopt these tools, and felt more creative and more enabled by reducing their usage and going back to writing code by hand the old fashioned way. AI's only been useful to me as a commit message writer and a rubber duck.

> I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.

This seems overly optimistic, but also quite dystopian. I hope that society doesn't become as integrated with these shitty AIs as we are with other technologies.

soulofmischief 3 hours ago
There is a way for us to both get what we want out of software development without ideologically crusading against each other's ideals. We can each have these valid opinions about how generative technology personally integrates into our lives.

Of course, that might be less and less true about our work as time goes on. At some point in the future, hiring an engineer who refuses to use generative coding tools will be the equivalent of hiring someone today who refuses to use an IDE or even a tricked out emacs/vim and just programs everything in Notepad. That's cool if they enjoy it, but it's unproductive in an increasingly competitive industry.

nozzlegear 2 hours ago
Perhaps so, but again I find your vision of the future overly optimistic. Luckily I'm self employed and don't have to worry about AI usage quotas and "being unproductive" in an increasingly unproductive and non-deterministic industry.
blibble 8 hours ago
> I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies

I'd rather be dead than a cortex reaver[1]

(and I suspect as I'm not a billionaire, the billionare owned killbots will make sure of that)

[1]: https://www.youtube.com/watch?v=1egtkzqZ_XA

callc 8 hours ago
You could have said the same thing when we invented the atomic bomb.

Cool science and engineering, no doubt.

Not paying any attention to societal effects is not cool.

Plus, presenting things as inevitabilities is just plain confidently trying to predict the future. Anyone can say "I understand one day this era will be history and X will have happened". Nobody knows how the future will play out. Anyone who says they do is a liar. If they actually knew, they'd go ahead and bet all their savings on it.

peyton 7 hours ago
I dunno, I take a more McLuhan-esque view. We’re not here to save the world every single time repeatedly.
soulofmischief 3 hours ago
I do say the same thing about the bomb. It was very cool science and engineering. I've studied many of the scientists behind the Manhattan Project, and the work that got us there.

That doesn't mean I also must condone our use of the bomb, or condone US imperialism. I recognize the inevitability of atomic science; unless you halt all scientific progress forever under threat of violence, it is inevitable that a society will have to reckon with atomic science and its implications. It's still fascinating, dude. It's literally physics, it's nature, it's humbling and awesome and fearsome and invaluable all at the same time.

> Not paying any attention to societal effects is not cool.

This fails to properly contextualize the historical facts. The Nazis and Soviets were also racing to create an atomic bomb, and the world was in a crisis. Again, this isn't ignorant of US imperialism before, during or after the war and creation of the bomb. But it's important to properly contextualize history.

> Plus, presenting things as inevitabilities is just plain confidently trying to predict the future.

That's like trying to admonish someone for watching the Wright Brothers continually iterate on aviation, witnessing prototype heavier-than-air aircraft flying, and suggesting that one day flight will be an inevitable part of society.

The steady march of automation is an inevitability my friend, it's a universal fact stemming from entropy, and it's a fallacy to assume that anything presented as an inevitability is automatically a bad prediction. You can make claims about the limits of technology, but even if today's frontier models stop improving, we've already crossed a threshold.

> Anyone who says they do is a liar.

That's like calling me a liar for claiming that the sun will rise tomorrow. You're right; maybe it won't! Of course, we will have much, much bigger problems at that point. But any rational person would take my bet.

TheDong 7 hours ago
You're saying "musicians" aren't "artists", and "open source contributors" aren't artists _or_ writers? "Artists" covers both of the groups you listed.
jacquesm 7 hours ago
Yes, we're all artists. Good now?
malfist 9 hours ago
Techbros trying to replace wage theft as the largest $ crime in the US
Mars008 6 hours ago
The picture will be incomplete if we don't mention that those 'artists and writers' are using the results at scale.
tsunamifury 7 hours ago
Something something… great artists steal.
jaybyrd 9 hours ago
well, if all the talent is stolen and put into our water-destruction machine, we can make significantly worse and more expensive versions of just giving the job to a wagey
logicprog 7 hours ago
goalieca 7 hours ago
That article was clearly AI-generated. I read pages of it and still didn’t see any actual data, just different phrasings of that claim.
logicprog 5 hours ago
What are you talking about? He goes into plenty of data, domain-relevant definitions, specific cases, etc.? He links to reliable sources for every numerical claim, of which there are several per paragraph, shows graphs and pictures, and does a lot of math (all of which I manually checked myself on paper as I went through). Also, the writing style is very much not ChatGPT-like, especially with all of the very honest corrections and edits he's added over time, which an AI slop purveyor wouldn't do.

The deep analysis starts at this section: https://andymasley.substack.com/p/the-ai-water-issue-is-fake...

You can't just dismiss anything you don't like as AI.

pesus 9 hours ago
On one hand, we're actively destroying society, but on the other, billionaires are getting richer! Why are you mad at us!?
Sharlin 7 hours ago
Something something, for a brief moment we created a lot of value for the shareholders
Joel_Mckay 5 hours ago
With -$4.50 revenue per new customer, these gamblers are demonstrably creating an externalized debt for society when the market inevitably implodes.

Some are projecting a >35% drop in the entire index when reality hits the "magnificent" 7. Look at the price of gold, corporate cash flows, and the laggard performance of US bonds. That isn't normal by any definition. =3

snowwrestler 7 hours ago
> As someone who desperately needs this technology to work out, I can honestly say it is the most essential tool ever created in all of human history.

For those having trouble finding the humor, it lies in the vast gulf between grand assertions that LLMs will fundamentally transform every aspect of human life, and plaintive requests to stop saying mean things about it.

As a contrast: truly successful products obviate complaints. Success speaks for itself. In TV, software, e-commerce, statins, ED pills, modern smartphones, social media, etc… winning products went into the black quickly and made their companies shitloads of money (profits). No need to adjust vibes, they could just flip everyone the bird from atop their mountains of cash. (Which can also be pretty funny.)

There are mountains of cash in LLMs today too, but so far they’re mostly on the investment side of the ledger. And industry-wide nervousness about that is pretty easy to discern. Like the loud guy with a nervous smile and a drop of sweat on his brow.

https://youtu.be/wni4_n-Cmj4

So much of the current discourse around AI is the tech-builders begging the rest of the world to find a commercially valuable application. Like the AgentForce commercials that have to stoop to showing Matthew McConaughey suffering the stupidest problems imaginable. Or the OpenAI CFO saying maybe they’ll make money by taking a cut of valuable things their customers come up with. “Maybe someone else will change the world with this, if you’ll all just chill out” is a funny thing to say repeatedly while also asking for $billions and regulatory forbearance.

datsci_est_2015 6 hours ago
> As a contrast: truly successful products obviate complaints. Success speaks for itself.

Makes me consider: Dotcom domains, Bitcoin, Blockchain, NFTs, the metaverse, generative AI…

Varying degrees of utility. But the common thread is people absolutely begging you to buy in, preying on FOMO.

twoodfin 6 hours ago
Or maybe McSweeney’s hasn’t been consistently funny for years and years?
snowwrestler 6 hours ago
McSweeney’s was never consistently funny. This is a good piece though.
i_love_retros 6 hours ago
Today I asked copilot agent a question about a selector in a cypress test and it requested to run a python command in my terminal.
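(For context, the question was about roughly this kind of thing; the snippet below is a made-up sketch, the selector names and routes are hypothetical:)

    // hypothetical Cypress test, roughly the shape of what I was asking about
    describe('checkout form', () => {
      it('submits via the confirm button', () => {
        cy.visit('/checkout');
        // the selector in question: target a data attribute rather than brittle CSS classes
        cy.get('[data-cy="confirm-order"]').should('be.visible').click();
        cy.contains('Order placed').should('exist');
      });
    });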
gradus_ad 7 hours ago
Jensen needs to keep escalating the hype to keep the hoarding dynamics in play, because that's what's selling GPUs. You can't look at voracious GPU demand as a real signal of AI app profitability or general demand. It's a function of global tech oligarchs with gargantuan cash hoards not wanting to be left behind. But hoarding dynamics are nonlinear through self-reinforcement, and the moment any hint of the limitations of current-gen AI crops up, spend will collapse.
stego-tech 7 hours ago
Excellent satire, absolutely something I could see in The Onion or Hard Drive as an Op-Ed.
Brajeshwar 6 hours ago
We humans will read this and laugh and chuckle, but the AI Overlords will not understand that. This will be added to the training data and become a truth. But what if it is?
olivierestsage 6 hours ago
Powerful catharsis in this
hedayet 6 hours ago
The same people selling you AI today (AGI tomorrow) were the ones selling remote work yesterday. Then "mandated" everyone back to the office.

Oh, and most of them had a crypto bag too.

<sigh>

Joel_Mckay 5 hours ago
Most cons can't create actual value, and inevitably must continue to con to survive. It would be called recidivism if they went to prison. =3
kindawinda 6 hours ago
dumbass article
twochillin 7 hours ago
fully expected this to be about nadella
willturman 6 hours ago
It is.
lifetimerubyist 6 hours ago
Gotta go back to shoving these nerds into lockers.
akomtu 7 hours ago
AI is alien intelligence, really. If biotech created an unusual mold that responds to electric impulses the way LLMs do, we would rightfully declare that this mold has some sort of intelligence and for this reason it is, technically speaking, an alien lifeform. AI is just that intelligent mold, but based on transistors instead of organic cells. Needless to say, it's a bad idea to create a competing lifeform that's smarter than us, regardless of whatever flimsy benefits it might have.
20260126032624 6 hours ago
Hey, I just wanted to say, big fan of your work on vixra.org
Joel_Mckay 5 hours ago
LLMs are not real AI, they would take 75% of our galaxy's energy to reach human-level error rates, and they are economically a fiction... but they don't have to be "AGI" to cause real harm.

https://en.wikipedia.org/wiki/Competitive_exclusion_principl...

The damage is already clear =3

https://www.youtube.com/watch?v=TYNHYIX11Pc

https://www.youtube.com/watch?v=yftBiNu0ZNU

https://www.youtube.com/watch?v=t-8TDOFqkQA

vivzkestrel 5 hours ago
- can we please get an article like this dedicated to windows 11?
porkloin 9 hours ago
I hate LLMs as much as the next guy, but this was honestly just not very funny. Humor can be a great vehicle for criticism when it's done right, but this feels like clickbait-level lazy writing. I wouldn't criticize it anywhere else, but I have enjoyed reading a bunch of actually good writing from mcsweeney's over the years in the actual literary journal and on their website.
Froztnova 8 hours ago
It's that brand of humor that isn't really humor anymore because the person writing it is clearly positively seething behind the keyboard and considers the whole affair to be deadly serious.

I've never really been able to get into it either because it's sort of a paradox. If I agree, I feel bad enough about the actual issue that I'm not really in the mood to laugh, and if I disagree then I obviously won't like the joke anyways.

ares623 3 hours ago
You’re not the target audience, then. It’s for those who can’t shake the feeling that something isn’t quite right about the whole thing.
porkloin 8 hours ago
For me I guess I don't really see what it's adding. You can watch an actual video clip of Jensen begging people not to "bully" or say "hurtful" things about AI while wearing a stupid leather jacket. It's a million times funnier to watch him squirm in real life.

I find it unfunny for the same reason I don't find modern SNL intro bits about Trump funny. The source material is already insane to the point that it makes surface-level satire like this feel pointless.

madeofpalk 9 hours ago
I think you just don’t like McSweeney’s style.
jaybyrd 9 hours ago
i think it's a little on the nose but overall def worth reading and funny enough for a chuckle in my opinion
heliumtera 9 hours ago
Agreed, it's almost non-satire given how cynical it is. I loved it.
notepad0x90 4 hours ago
> Yes, it’s expanding the surveillance state, and yes, it’s destroying the education system, and yes, it’s being trained on copyrighted work without permission, and yes, it’s being used to create lethal autonomous weapons systems that can identify, target, and kill without human input, but… I forget my point, but ultimately, I think you should embrace it.

What's your answer to this? How did it turn out for nuclear energy? If it weren't for this sort of thinking, we'd have nuclear power all over the world and climate issues would not have been as bad.

You should embrace it, because other countries will and yours will be left behind if you don't. That doesn't mean put up with "slop", but it also doesn't mean be hostile to anything labeled "AI". The tech is real and it is extremely valuable (I applaud your mental gymnastics if you think otherwise), but not as valuable as these CEOs want it to be, or in the way they want it to be.

On one hand, you have clueless executives and randos trying to slap "AI" on everything and creating a mess. On the other extreme, you have people who reject anything just because it has auto-complete (LLMs :) ) as one of its features. You're both wrong.

What Jensen Huang and other CEOs like Satya Nadella are saying about this mindless bandwagoning of "oh no, AI slop!!!" b.s. is true, but I think even they are too caught up in tech circles. Regular people, for the most part, don't feel this way; they only care about what the tool can do, not how it does it. But people in tech largely influence how regular people are educated, informed, etc.

Look at the internet: how many "slop" sites were there early on? How much did it get dismissed because "all the internet is good for is <slop>"?

Forget everything else: just having an actual program that I can use for free or cheap, on my computer, that can do natural language processing well! That's insane! Even in some of the sci-fi I've been rewatching in recent years, the "AI/Computer" in spaceships or whatever is nowhere near as good as ChatGPT is today at understanding what humans are saying.

I'm just calling for a bit of perspective. Some people are too close to things and looking under the hood too much; others are too far away and looking at it from a distance. The AI stock valuations are of course ridiculous, as are the overhyped investments in this area and the datacenter buildout madness. And like I said, there are tons of terrible attempts at using this tech (including Windows Copilot), but the extremes of hostility against AI I'm seeing are also concerning, and not because I care about this awesome tech (which I do), but because, you know, the job market is rough and everything is already crappy. I don't want to go through an AI market crash or whatever on top of everything else, so I would really appreciate it on a personal level if the cause of any AI crash is meritocratic instead of hype and bandwagoning, that's all.

ares623 4 hours ago
I wasn’t around at the time to argue against nuclear energy.

I wasn’t old enough to argue against the internet. Plus, to be fair to the ones who were, there was no prior tech anything like it, so there was no realistic way to guess what it would turn out to be.

I wasn’t old enough to argue against social media and the surveillance it brought.

Now AI comes along. And I am old enough. And I am experienced enough in a similar space. And I have seen what similar technologies have done and brought. And I have taken all that, and my conscience and instinct tell me that AI is not a net good.

Previous generations have failed us. But we make do with the world we find ourselves born into.

I find it absurd that experienced engineers today look at AI and believe it will make their children’s lives better, when very recent history, history they themselves lived through, tells a very different story.

All so they can open 20 PRs per day for their employers.

irishcoffee 8 hours ago
It is highly amusing to me that the same ~2,000 people who have the most to gain from LLM success also largely control the media narratives and the vast majority of the global economy.

Someone coined a term for those of the general population who trust this small group of billionaires and defend their technology.

“Dumb fucks”

random_duck 9 hours ago
Is this a sign that we plebs are starting to grow discontent?
blibble 8 hours ago
it's certainly a change from the "inevitability" vomit the boosters were emitting this time last year
techblueberry 6 hours ago
Oh, I mean they’re still doing that too:

https://www.darioamodei.com/essay/the-adolescence-of-technol...

blibble 5 hours ago
oh dear

the whole thing reads as "it's going to be so powerful! give money now!"

heliumtera 9 hours ago
Starting? Society, minus those who struggled with CSS, is fully fatigued by AI.
theLegionWithin 9 hours ago
nice satire
lovich 7 hours ago
The Luddites weren’t anti technological progress; they were anti losing their jobs and entire way of life, with an impolite “get fucked, you fucking peasant” message to boot.

I wonder what name the tech bros will come up with for those of us who feel the same way nowadays.

yoyohello13 6 hours ago
They don’t need a new name. They just keep using Luddite.
kshri24 8 hours ago
> just use my evil technology

Ridiculous to say the technology, by itself, is somehow evil. It is not. It is just math at the end of the day. Yes, you can question the moral/societal implications of said technology (if used in a negative way), but that does not make the technology itself evil.

For example, I hate vibe coding with a passion because it enables wrong usage (IMHO) of AI. I hate how easy it has become to scam people using AI. How easy it is to create disinformation with AI. Hate how violence/corruption etc could be enabled by using AI tools. Does not mean I hate the tech itself. The tech is really cool. You can use the tech for doing good as much as you can use it for destroying society (or at the very minimum enabling and spreading brainrot). You choose the path you want to tread.

Just do enough good that it dwarfs the evil uses of this awesome technology.

budududuroiu 7 hours ago
Well, at this moment, the evil things done with technology vastly surpass the good things done with technology.

Democratisation of tech has allowed more good to happen; centralisation, the opposite. AI is probably one of the most centralisation-happy technologies we've had in ages.

pixl97 7 hours ago
Centralization of technology has been happening at a rapid pace, and is only a tiny bit the fault of technology itself.

Capitalism demands profits. Competition is bad for profits. Multiple factories are bad for profits. Multiple standards are bad for profits. Expensive workers are bad for profits.

mrnaught 7 hours ago
“Just do enough good...”? It is hard to define what "good" is. This tech has many dimensions and second-order effects, yet all the tech giants call it a “net positive” without fully understanding what is unfolding.
robinhoode 7 hours ago
If we lived in a sane society, AI would actually be used for good.

AI is literally trained on human output and used by humans. If humans are doing awful things with it, then it's because humans are awful right now.

I strongly feel this is related to the rise of fascism and wealth inequality.

We need a great conflict like WW2 to release this tension.

wk_end 8 hours ago
> It is just math at the end of the day.

Not really - it's math, plus a bazillion jigabytes of data to train that math, plus system prompts to guide that math, plus data centers to do that math, plus nice user interfaces and APIs to interface with that math, plus...

Anyway, it's just kind of a meaninglessly reductive thing to say. What is the atom bomb? It's just physics at the end of the day. Physics can wreak havoc on the world; so can math.

johnnyanmac 8 hours ago
>Nothing either good or bad but thinking makes it so - Shakespeare

That said, their thinking is that this can remove labor from their production, all while stealing works under the very copyright system they set up. So I'd call that "evil" in every conventional sense.

>Just do enough good that it dwarfs the evil uses of this awesome technology.

The evil is in the root of the training, though. And sadly, the money is not coming from "good". I don't see any models focusing on ensuring they train only on CC0/FOSS works, so it's hard to argue for any good uses with evil roots.

If they could do that at the bare minimum, maybe they could make the "horses vs. cars" argument. As it is now, this is a car powered by stolen horses. (Also, I work in games, and generative AI is simply trash in quality right now.)

pixl97 7 hours ago
Even this has little to do with AI and points right at the capitalist society that already exists. HN really doesn't like to talk about its golden child that lets money flow, but the concentration of wealth and IP by the super wealthy occurred before GenAI was a thing.

This also ignores the broken fucking copyright system that ensures once you create something you get many lifetimes of fucking off without having to work, so if genAI kills that I won't shed a tear.

trhway 9 hours ago
Was the article itself written by AI?
zahlman 8 hours ago
McSweeney's is a well-known Internet satire site that has been in operation for decades; while there are multiple contributors, the style here seems fairly standard for the site, the author has a submission history going back to at least 2020, and I see no LLM clichés. Suspecting AI here makes about as much sense to me as suspecting it on an arbitrarily selected LWN article.
rednafi 8 hours ago
"Oh, it's another tool in your repertoire like Bash" doesn't garner billions of dollars in investment. So they have to address it as the next electricity or the internet, when in its current form, it's much closer to a crypto grift than it is to electricity.
gip 8 hours ago
> "immoral technofascist life"

Many people would rather argue about morality and conscience (of our time, of our society) than confront facts and reality. What we see here is a textbook case of that.

tdb7893 8 hours ago
Is there a reason you view conscience and confronting facts as opposed? Also, it seems to me that morality and conscience are important to argue about, with facts just being part of that argument.
SpicyLemonZest 7 hours ago
I think that someone interested in discussing facts would not write the phrase "immoral technofascist life". If I took the discussion at face value, I might respond asking for examples of how e.g. Dario Amodei is a "technofascist", but I think we can agree that would be really obtuse of me.
tdb7893 6 hours ago
Haha, my experience is that people making those sorts of pronouncements will argue literally anything, so I definitely wouldn't assume they are uninterested in arguing facts. I agree, though, that arguing with some people is obtuse, and arguing with the original post seems like one of those cases.

My confusion is more with the person I was responding to complaining about people arguing morality, which seems incredibly important to discuss. A lack of facts obviously makes discussions bad, but there's definitely no dichotomy with discussing morality (at least not with the people I know). My issue has been not so much with people arguing morality, which often makes for my more productive arguments, but with people who have a fundamentally incompatible view of what the facts are.

socialcommenter 5 hours ago
It's much easier for someone who blurs the facts to keep a clear conscience because they don't have to acknowledge (to themselves) what they've done.

Someone who's clear-eyed about the facts is much more likely to have a guilty conscience/think someone's actions are unconscionable.

I don't mean to argue either side in this discussion, but both sides might be ignoring the facts here.

johnnyanmac 7 hours ago
> instead of confronting facts and reality.

okay, what are the "facts and reality" here? If you're just going to say "AI is here to stay", then you 1) aren't dealing with the core issues people bring up, and 2) aren't bringing facts but defeatism. Where would we be if we used that logic for, say, Flash?

mattgreenrocks 7 hours ago
It’s wild to me that we see people like Jensen as great while also tolerating public whining of the sort in the linked article. Don’t get me wrong, there are people who are far worse! But why do we put up with a billionaire whining that people are critical of what they make? At that scale, it is guaranteed to have haters. It’s just statistics, man.
daft_pink 8 hours ago
Maybe he shouldn’t have claimed we could get in a moving vehicle with his AI driving, no problem.
Lerc 9 hours ago
Perhaps things would work out better if people didn't say mean things regardless of who it's about.

You can still criticise without being mean.

donkey_brains 7 hours ago
Woosh
thinkingtoilet 9 hours ago
Explain how to nicely criticize computer software that allows for the generation of sexually explicit images of children.
Lerc 8 hours ago
I'm not sure what you're asking for here. Are you actually requiring me to be a bully to effect change?

I can certainly criticize specific things respectfully. If I prioritised demonstrating my moral superiority, I could loudly make all sorts of disingenuous claims that wouldn't make the world a better place.

I certainly do not think people should be making exploitative images in Photoshop or indeed any other software.

I do not think that I should be able to choose which software those rules apply to based upon my own prejudice. I also do not think that being able to do bad things with something is sufficient to negate every good thing that can be done with it.

Countless people have been harmed by the influence of religious texts, I do not advocate for those to be banned, and I do not demand the vilification of people who follow those texts.

Even though I think some books can be harmful, I do not propose attacking people who make printing presses.

What exactly are you requiring here? Pitchforks and torches? Why AI and not the other software that can be used for the same purposes?

If you want robust regulation that can provide a means to protect people from how models are used, then I am totally prepared (and have made submissions to that effect) to work towards that goal. Being antagonistic works against making things better. Crude generalisations convince no one. I want the world to be better, and I will work towards that. I just don't understand how anyone could believe vitriolic behaviour will result in anything good.

chasd00 7 hours ago
Photoshop has been around for a long time.
paodealho 6 hours ago
And canvases and paint have existed for even longer, but they need someone skilled to make use of them.

Stable Diffusion enabled the average lazy, depraved person to create these images with zero effort, and apparently there are a lot of these people in the world.

bigstrat2003 5 hours ago
So? At the end of the day, regardless of how skilled one has to be to use it, a tool is not considered morally responsible for how it is used. Nor is the maker of that tool considered morally responsible for how it is used, except in the rare case where the tool only has immoral uses. And that isn't the case here.