maciejzj 10 hours ago
I believe that the root cause for the "fatigue" and tension around AI is much more embedded in the societal context.

US (and the rest of the western world) citizens face real-life problems: youth unemployment, social/political instability, unaffordable housing, internet addiction (yes, I believe it is a real problem that people spend 5 hours on their phones daily) and social atomisation. Meanwhile, all resources are rushed into building technology that does not fundamentally improve people's well-being. Advanced societies already had pretty good capabilities in writing, design, coding, searching for information, etc. Now we are pouring all available resources, at any cost, into automating these processes even more. The costs of this operation are tremendous and it doesn't yield any results that improve everyday lives.

In 2020 there were tons of UI/UX/graphics companies that could produce copious amounts of visual content for society while providing work to many people. Now we are about to automate this process and be able to generate an infinite amount of graphics on demand. To what end? Were our capabilities to create graphics any kind of bottleneck before? I don't think so.

The stock market and tech leadership are completely decoupled from the problems that the majority of people face. The real effect of the AI at hand is to commoditise intellectual work that previously functioned well dispersed across society. This does not bring benefit to the majority of people.

gwbas1c 10 hours ago
IMO: We should be using AI for the common good, i.e., to lower the cost of living, increase our living standards, improve health, improve access to food and shelter, etc.

I still have yet to see any LLM that appears to do that. They all seem to allow me to have a conversation with data; i.e., a better Google search, or a quick way to make clip art.

jimmaswell 9 hours ago
I can't express how disappointed I am in the societal backlash to AI. It used to rightfully be something we looked forward to. I've been fascinated by it for as long as I've known what a computer was, from watching CyberChase as a kid in the early 2000s to reading the Asimov books to making my own silly sentence-mixing chatbot with a cult following on IRC.

I never thought a computer would pass the Turing test in our lifetime (my bot did by accident sometimes, which was always amusing). I spoke to an AI professor who's been at this since the 80s and he never thought a computer would pass the Turing test in our lifetime either. And for it to happen and the reaction to be anything short of thunderous applause betrays a society bankrupt of imagination, forward thinking, and wonder.

We let pearl-clutching loom smashers hijack the narrative to the point where a computer making a drawing based on natural language is "slop" and you're a bad person if you like it, instead of it being the coolest thing in the fucking world which it actually is. We have chatbots that can do extraordinary feats of research and pattern-matching but all we can do is cluck over some idiot giving himself bromide poisoning. The future is here and it's absolutely amazing, and I'm tired of pretending it isn't. I can't wait for this "AI users DNI", "This video proudly made less efficiently than it could have been because I'm afraid of AI" social zeitgeist to die off.

hunterpayne 9 hours ago
> instead of it being the coolest thing in the fucking world

Some people think an M-16 is the coolest thing in the world. Nobody thinks we should be handing them out to school children. The reaction is because most people think AI will compound our current problems. Look at video generation. Not only does it put a lot of people out of work, it also breaks the ability of people to post a video as proof of something. Now we have to try to determine if the very real-looking video is from life or a neural net. That is very dangerous and the tech firms released it without any real thought or discussion as to the effect it would have. They make illegal arms dealers look thoughtful by comparison. You ignoring this (and other effects) is just childish.

soco 8 hours ago
Even the rationale looks childish to me: "coolest thing in the fuckin world", come on, we're adults here, adults who actually care about the world, not to woo the world but to keep it and pass it on to our kids in the best shape. Or at least in a better shape, and I believe right now we're failing at it; we are failing our children by throwing a flaming Molotov at them instead of a ball to play with.
maciejzj 8 hours ago
I think that AI capabilities are now really impressive; nevertheless, my point is not about it being “cool” (it is), but rather about what kind of society we are going to produce with it and how it impacts people’s lives.

> It used to rightfully be something we looked forward to

This is rather unimportant, but I would say that media has usually portrayed AI as a dangerous thing: 2001: A Space Odyssey, Terminator, Mass Effect, Her, Alien, The Matrix, Ex Machina, you name it.

keyringlight 5 hours ago
Although it might be more broadly applied to 'computers', Asimov's stories around AI and tangents like robots often have the underlying message that it's not so much the technology as how humans use, react to, and interpret what it's doing.
jbaber 8 hours ago
The AI in Her was dangerous rather than depressing?
thankyoufriend 6 hours ago
AI isn't solving the problems that our society needs to solve, and it's potentially making some of them worse. If you can't understand why people feel that way by now, then you are out of touch with their struggle. Instead of being disappointed in your fellow humans who contain the same capacity for wonder as you do, perhaps you should wonder why you are so quick to dismiss them as luddites. BTW you might want to read more about those guys; they didn't actually hate technology just because it was progress. They hated the intentional disruption of their hard-earned stability in service of enriching the wealthy.
krackers 5 hours ago
>It used to rightfully be something we looked forward to

Science fiction has always been mixed. In Star Trek the cool technology and AGI-like computer are accompanied by a post-scarcity society where fundamental needs are taken care of. There are countless other stories where technology and AI are used as tools to enrich some at the expense of others.

>We let pearl-clutching loom smashers hijack the narrative to the point where a computer making a drawing based on natural language is "slop" and you're a bad person if you like it

I don't strongly hold one opinion or the other, but I think the root of people's backlash is fundamentally that it is something that jeopardizes their livelihood. Not in some abstract "now the beauty and humanity of art is lost" sort of way, but much more concretely: because of LLM adoption (or at least hype), they are out of a job and cannot make money—which hurts their quality of life much more than access to LLMs improves it. Then those people see the "easy money" pouring into this bubble, and it would be hard not to get demoralized. You can claim that people just need to find a different job, but that ignores the reality that over the past century the skill floor has basically risen and the ladder been pulled up; and perhaps even worse, trying to reach for that higher bar still leaves one "treading water" without any commensurate growth in earnings.

Izkata 5 hours ago
> In Star Trek the cool technology and AGI-like computer are accompanied by a post-scarcity society where fundamental needs are taken care of.

The Star Trek computer doesn't even attempt to show AGI, and Commander Data is the exception, not the rule. Star Trek has largely been anti-AGI for its entire run, for a variety of reasons - dehumanization, unsafe/going unstable, etc.

krackers 4 hours ago
I think you're confusing AGI with ASI or sentience? The Enterprise's computer clearly meets the definition of AGI, in that it can do basically any task the humans require of it (limited only by data, which humans need to go out and gather). Especially consider that it also runs the holodeck.

Unlike modern LLMs it also correctly handles uncertainty, stating when there is insufficient information. However they seem to have made a deliberate effort to restrict/limit the extent of its use for planning and command (no "long-running agentic tasks" in modern parlance), requiring human input/intervention in the loop. This is likely because as you mentioned there is a theme of "losing humanity when you entrust too much to the machine".

intended 18 minutes ago
> We let pearl-clutching loom smashers hijack…

“let” nothing.

There is pushback, and not being able to enjoin it effectively doesn't invalidate it.

As a concrete example: Here on HN, there are always debates on what the hell people mean when they say LLMs helped them code.

I’ve seen it happen enough that I now have a boilerplate request for posters: share your level of seniority, experience, domain familiarity, language familiarity, and project result, alongside how the LLM helped.

I am a nerd through and through, and still read copious amounts of science fiction on a weekly basis. I lack no wonder and love for tech.

To make that future, the jagged edges of AI output need to be mapped and tamed. That needs these kinds of precise conversations so that we have a shared reality to work on.

Doing that job badly is the root cause of people talking past each other. Dismissing it as doomerism is essentially missing market and customer feedback.

archagon 3 hours ago
Is there a single AI corporation working for the public good? “Open”AI just shed the last vestiges of its non-profitdom, and every single AI CEO sounds like a deranged cultist.

Wake me up when we have the FSM equivalent for AI. What we have now is a whole lot of corporate wank.

ceroxylon 12 hours ago
> The friction isn’t just about quality—it’s about what the ubiquity of these tools signals.

Unless they are being ironic, using an AI accent with a statement like that for an article talking about the backlash to lazy AI use is an interesting choice.

It could have been human-written (I have noticed that people who use them all the time start to talk like them), but the "it's not just x — it's y" format is the hallmark of mediocre articles being written / edited by AI.
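
A rough sketch of the pattern I mean, if anyone wants to grep for it (a toy heuristic I'm making up here, not a validated detector; the sample sentence is the article's own):

  import re

  em = "\u2014"  # em dash character
  # Heuristic for the "isn't just X, it's Y" framing discussed here.
  pattern = re.compile(
      r"\bisn'?t just\b[^.!?]{0,80}?(?:" + em + r"|, )\s*it'?s\b",
      re.IGNORECASE,
  )

  sample = ("The friction isn't just about quality" + em +
            "it's about what the ubiquity of these tools signals.")
  print(bool(pattern.search(sample)))  # prints True for this sentence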

ianferrel 12 hours ago
This kind of phrasing has been common in writing long before AI. There's a reason that AI picked it up—it's a natural human written speech pattern.
0_____0 12 hours ago
It's ad copy style. Humans have been writing like that for decades but it's not a naturalistic construction.

Not sure who you talk to, but the 'It's Not Just X, It's Y' format doesn't show up in everyday speech (caveat, in my experience).

PlunderBunny 9 hours ago
Are you distinguishing between speech and writing? I'd agree on the former - that no-one talks that way.
cess11 12 hours ago
I find it kind of common, used as a riff off of patterns in advertising and post-politics.
lumost 12 hours ago
It’s not universal - but it’s a compelling rhetorical device /s

It just sounds like slop since it's everywhere now. The pattern invites questions about the authenticity of the writer, and whether they've fallen victim to AI hallucinations and sycophancy. I can quickly become offended when someone asks me to read their ChatGPT output without disclosing it was GPT output.

Now when AI learns how to use parallelism I will be forced to learn a new style of writing to maintain credibility with the reader /s

ratelimitsteve 12 hours ago
this. marketing speak appears much more frequently in online text, which is what AI is trained on, than it does in normal everyday human speech that AI isn't able to capture and train on en masse yet.
wavemode 12 hours ago
I love how you tried to intentionally demonstrate that it's a normal speech pattern, but then your own sentence didn't even match the speech pattern.

This AI speech pattern is not just an em dash—it's a trite and tonally awkward pairing of statements following the phrase "not just".

donmcronald 12 hours ago
I hate this. Writing skills used to be a way to show you're paying attention to detail and making an effort. Now everyone thinks I'm cheesing it out with AI.

I also have a tougher time judging the reliability of others because you can get grammatically perfect, well organized emails from people that are incompetent. AI has significantly increased the signal to noise ratio for me.

ben_w 12 hours ago
For some reason, my mind has gone to this in a few of the comments, not just yours:

  Ten Ways To Tell AI Listicles From Human Ones—You Won't Believe Number Seven
thfuran 12 hours ago
I think you mean it has decreased the SNR (by raising the noise floor).
watwut 12 hours ago
If you write like AI, you write badly. I mean it genuinely. AI writing is not good text. It is grammatically correct, passable text.
wvbdmp 12 hours ago
Yeah, but the stuff people seem to obsess about is just bits of neat typography like dashes and rhetorical flourishes that should, or used to, signify good writing and worked for a reason. The AI just overuses them; it's not that they're bad per se. I suppose it's a treadmill like anything else that gets too popular. We have to find something new to do the same thing (if possible!). And that sucks.
watwut 8 hours ago
People can't verbalize good and bad writing. Being able to see it and being able to diagnose it are two different things.

Fact is, AI writing is just bad. It checks all the elementary school writing boxes, but fails in the sense that it is bad, overly verbose, subtly but meaningfully incorrect text. People see that, can't put the issue into words, and then look for other signs.

Yes, AI is bad in the way that someone who has learned some rules about writing produces bad texts. And when a human writes the same way, it is still bad.

Capricorn2481 11 hours ago
You are correct. There's just a lot of societal pressure to know what good writing is, even amongst people who don't read outside of social media. They don't want to appear stupid, so they say dashes are "AI" because everybody does.
Capricorn2481 10 hours ago
Having an em dash is not "writing like AI." It's been around forever.

"The irony, of course, is that many of the people most convinced of the em dash’s inhumanity are least equipped to spot actual AI writing"

https://medium.com/microsoft-design/the-em-dash-conspiracy-h...

watwut 8 hours ago
Having an em dash also does NOT show skill in the sense of "Writing skills used to be a way to show you're paying attention to detail and making an effort."

The em dash was never about attention to detail or effort. It is a way to construct a sentence when you don't know how.

NewsaHackO 12 hours ago
The thing is he used both the em dash and the "It's not just X it's Y" form in the same sentence.
Sevii 12 hours ago
It's not. Most people have never written anything using that format.
zetanor 12 hours ago
That's only because most people don't write.
bdangubic 12 hours ago
that is exactly right!
everdrive 12 hours ago
It's a very sad reflection that people can no longer reliably identify real vs. LLM-generated text.
jibal 12 hours ago
It's simply how literate people write.
0_____0 12 hours ago
You write like a late night kitchen gizmo ad?
dmpk2k 12 hours ago
I suppose there are worse things than my scribblings sounding like a late-night kitchen gizmo ad. :)
jasode 11 hours ago
>write like a late night kitchen gizmo ad?

I naturally wrote "it's not just X, it's Y" long before November 2022 ChatGPT. Probably because I picked up on it from many people.

It's a common rhetorical template of a parallel form where the "X" is re-stating the obvious surface-level thing and then adding the "Y" that's not as obvious.

E.g. examples of regular people writing that rhetorical device on HN for 15+ years that wasn't in the context of advertising gadgets:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

So AI-slop writes like that because a lot of us humans wrote like that and it copies the style. Today is the first time I've learned that the "It's not X, it's Y" really irritates many readers. Personally, I've always found it helpful when it reveals a "Y" that's non-obvious.

NewsaHackO 8 hours ago
1) None of those had an em dash.

2) Most of those, while they had the two statements, didn't put the statements in immediate succession.

There are maybe 4 unique examples in that search over the past 15 years, which is why the explosion of the pattern we see today is very telling, and most likely due to LLMs.

happytoexplain 11 hours ago
It's how I've written my entire adult life - except I'm too lazy to type a real em-dash.
ghaff 11 hours ago
I use them often enough that I know the Mac shortcut more or less automatically.
mrob 12 hours ago
I'd give this the benefit of the doubt because the y section is more complex than I'd expect from AI. If it said "it's about the ubiquity of these tools", I'd agree it feels like AI slop, but "it's about what the ubiquity of these tools signals" has a deeper parse tree than I usually see in that negative parallelism structure.
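
If you want to poke at that intuition, here's a quick sketch of how one might compare dependency-tree depths (using spaCy; the model name and the two sample phrases are just illustrative, not a claim about the article's full text):

  # Sketch: compare dependency-parse depth of the two phrasings with spaCy.
  # Assumes the small English model is installed: python -m spacy download en_core_web_sm
  import spacy

  nlp = spacy.load("en_core_web_sm")

  def depth(token):
      # Depth of a token's subtree in the dependency parse.
      children = list(token.children)
      return 1 + max((depth(c) for c in children), default=0)

  for text in ["it's about the ubiquity of these tools",
               "it's about what the ubiquity of these tools signals"]:
      doc = nlp(text)
      root = [t for t in doc if t.head is t][0]   # the sentence root points to itself
      print(text, "->", depth(root))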
tensor 12 hours ago
On the plus side, I guess we can thank AI for bringing back the humble em-dash.
ghaff 12 hours ago
The em-dash has been standard at jobs I had over the past 20 years. Not necessarily a fan of lack of separation on both sides of the punctuation but it's the normal style.
NewsaHackO 12 hours ago
>The em-dash has been standard at jobs I had over the past 20 years.

What does this statement even mean?

ghaff 12 hours ago
That we commonly used em-dashes as a mark to set off parenthetical information. Yes, you can also use parentheses and they're somewhat interchangeable.
NewsaHackO 12 hours ago
OK, makes sense. I thought you were implying there was a quota of em-dashes you had to use each quarter.
palmotea 11 hours ago
> On the plus side, I guess we can thank AI for bringing back the humble em-dash.

It was always there, and used. It was just typically restricted to pretty formal, polished writing (I should know, I have coworkers who fuss over em and en spaces). I bet if you looked, you'd find regular use of em-dashes in Newsweek articles, going back decades.

The thing LLMs did was inject it into unsophisticated writing. It's really only an LLM tell if it's overused or used in an unexpected context (e.g. an 8th-grader's essay, an email message).

robocat 9 hours ago
I suspect em-dashes are particularly American.

I tend to insert space before and after on the very rare occasion I might use one . . . However I'm from the colonies and I've just learnt my preference is likely due to British influence.

ghaff 10 hours ago
I mostly just use a double hyphen in casual/lazy writing like emails (or HN comments :-)) but use an em-dash in anything more formal. En-dashes just seem pedantic and I don't really use them in general.
tensor 7 hours ago
Yes, they were always common in academic writing and such, but you rarely saw them in casual text.
ghaff 10 hours ago
I always found it a bit amusing how we have three kinds of dashes/hyphens while we have double quotes that serve 3 or 4 different purposes :-)
lumost 12 hours ago
I’m quickly becoming convinced that humans adapt to AI content and quickly find it boring. It’s the same effect as walking through the Renaissance section of an art museum or watching 10 action movies. You quickly become accustomed to the flavor of the content and move on. With human-generated content, the process and limitations can be interesting - but there is no such depth to AI content.

This is fine for topics that don’t need to be exciting, like back office automation, data analysis, programming, etc., but it leads me to believe most content made for human consumption will still need to be human-generated.

I’ve ceased using AI for writing assistance beyond spell check, accuracy, and as an automated reviewer. The core prose has to be human-written to not sound like slop.

gdulli 12 hours ago
The population has been handed a shortcut machine and will give in to taking the path of least resistance in their tasks. It may be ironic but it's not surprising to see it used here.
bgwalter 12 hours ago
It's an age-old rhetorical construct, the Antithesis:

https://en.wikipedia.org/wiki/Antithesis

"AI" surely overuses it but this article didn't seem suspect to me. I agree that "AI" speak rubs off on heavy users though.

cmiles8 13 hours ago
Tech customers are massively AI hype fatigued at this point.

The tech isn’t going away, but a hard reset is overdue to bring things back down for a cold hard reality check. Article yesterday about MSFT slashing quotas on AI sales as customers aren’t buying is in line with this broader theme.

Morgan Stanley is also quietly trying to offload its exposure to data center financing, in a move that smells very summer-of-2008-ish. CNBC now talks about the AI bubble multiple times a day. OpenAI looks incredibly vulnerable and financially over-extended.

I don’t want a hard bubble pop such that it nukes the tech ecosystem, but we’re reaching a breaking point.

mrtksn 12 hours ago
> AI hype fatigue

I think your wording is the correct one, not "AI fatigue", because I don't want to go back to the pre-AI era, yet at the same time I can't stand another "OMG It's over" tweet.

donmcronald 12 hours ago
Yeah. Hype fatigue is a good description. Every time I see them talking about having AI book flights and hotels I think about the digital assistants on phones. Didn't they promise us the same thing back then?

I won't believe any of the claims until I see them working (flawlessly).

donmcronald 12 hours ago
> I don’t want a hard bubble pop such that it nukes the tech ecosystem, but we’re reaching a breaking point.

Some days I wonder if we'd be better off or worse off if we had a complete collapse of technology. I think it'd be painful with a massive drop in standard of living, but we could still recover. I wonder if the same will be true in a couple more generations.

I think it's dangerous to treat younger generations like replaceable cogs. What happens when there's no one around that knows how the cogs are supposed to fit together?

Sevii 12 hours ago
The annoying part is that every tech company made an internal mandate for every team to stuff AI into every product. There are some great products that use AI (Claude Code, ChatGPT, Nano-banana, etc). But we simply haven't had time to come up with good ways of integrating AI into every software product. So instead every big tech company spent two years forcing AI into everything with minimal thought. Obviously people are not happy with this.
redserk 12 hours ago
Some AI was done tastefully. Apple Photos search comes to mind. I can search for objects across my photos, and it does a reasonable job finding what I want. It's an example of AI that's so well done, the end user doesn't even know it's there.

Now Microsoft pushing "Copilot" is the complete opposite. It's so badly integrated with any standard workflow, it's disruptive in the worst of ways.
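
For what it's worth, the kind of object search I mean can be sketched pretty simply with a CLIP-style embedding model (illustrative only; the model name and file paths are placeholders, and this is obviously not Apple's actual implementation):

  # Minimal sketch of text-to-photo search with a joint text/image embedding model.
  from PIL import Image
  from sentence_transformers import SentenceTransformer, util

  model = SentenceTransformer("clip-ViT-B-32")          # embeds text and images in one space
  paths = ["photo1.jpg", "photo2.jpg", "photo3.jpg"]    # placeholder photo library

  img_emb = model.encode([Image.open(p) for p in paths])
  query_emb = model.encode(["a dog on a beach"])

  scores = util.cos_sim(query_emb, img_emb)[0]          # similarity of the query to each photo
  best = max(range(len(paths)), key=lambda i: float(scores[i]))
  print("best match:", paths[best])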

MSFT_Edging 12 hours ago
Google's photo app has had that search for years. It seemed a lot more useful a few years ago when it was using dumber image recognition models. Now it returns fewer matches on the same search. I tend to search the same things a few times a year as a supplement to some story, and it's been frustrating to watch in real time: a picture that used to come up when searching "car" or "guitar" is missing, and unrelated pictures are returned instead.
redserk 11 hours ago
That's fair, but my point was that AI should be implemented in a way that's out-of-the-way but is still helpful to users.

I think LLMs are incredible, and I think there are a lot of really good use cases where they can help surface recommendations and actions for a user to take. I don't think every user wants every app they touch turned into a chatbot though.

jamincan 7 hours ago
It's not just that they are adding AI to every single product; it's being pushed on customers in incredibly intrusive and irritating ways that make it seem as though they're desperate for their AI investments to pan out. If your AI productivity enhancement stuff is so amazing, shouldn't you be turning away customers at the door due to demand instead of browbeating me into finally signing up for it in submission?
bluefirebrand 12 hours ago
Yup. The tech giants surely know the correction is coming by now. They are just trying to milk it a tiny bit longer before it all comes crashing down.

Keep your eyes on the skies; I forecast executives in golden parachutes in the near future.

cmiles8 12 hours ago
Yes. IPO talk suggests there will be rushed attempts to cash out before this all implodes, but all signs point to that ship having sailed.

I don’t see any big AI company having a successful IPO anytime soon which is going to leave some folks stuck holding the financial equivalent of nuclear waste.

GuinansEyebrows 8 hours ago
hey, at least this time it'll be venture capitalists and not pensioners.
nullocator 7 hours ago
Aren't the VCs getting some of their money from pensions?
dreamcompiler 12 hours ago
First they extracted oil and water and gold from the ground and sold them back to us.

Then they extracted our privacy and sold it to advertisers.

Now with AI they're extracting our souls. Who do they expect to sell them to?

Lord-Jobo 11 hours ago
The current AI is a lot better at extracting my sanity than anything else.
soraminazuki 8 hours ago
Haha, you haven't even seen the start of it. You know how bad Google's robot "support agents" are? Well, brace yourself because billionaires want to replace every job out there with those useless robots. All the while robbing the general population of wealth, computer chips, electricity, drinking water, and who knows what else.
ben_w 10 hours ago
> Now with AI they're extracting our souls. Who do they expect to sell them to?

Expect? To work those souls to build the Dyson swarms.

Well, some of them, anyway. Zuckerberg clearly uses the word "superintelligence" as a buzzword; he doesn't buy that vision of how "super" it could be, given what he says his users will do with it.

harvey9 12 hours ago
Joke is on them cos I already mortgaged mine.
yyyyuuuuiiiii 10 hours ago
> a Pew Research center survey found that nearly one in five Americans saw AI as a benefit rather than a threat. But by 2025, 43 percent of U.S. adults now believe AI is more likely to harm them than help them in the future, according to Pew.

Am I stupid or is this a stupid line that proves the antithesis of what they want? It went from 4 in 5 being negative to less than half?

What even is journalism now.

hunterpayne 5 hours ago
There was probably an "I don't know" category. And that category probably covered most of the respondents in the first poll.
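Back-of-the-envelope, with made-up shares (NOT actual Pew figures) just to show both quoted numbers can be true at once:

  # Hypothetical response shares in percentage points, NOT real Pew results.
  early = {"benefit": 20, "harm": 25, "unsure": 55}   # "nearly one in five saw a benefit"
  later = {"benefit": 25, "harm": 43, "unsure": 32}   # "43 percent expect more harm than help"

  # "1 in 5 positive" never implied "4 in 5 negative"; if the undecided bucket
  # shrinks while the harm share grows, pessimism really did rise.
  print(later["harm"] - early["harm"])   # 18 point increase in the harm share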
alienbaby 12 hours ago
AI is a lot more than AI-generated media.
jtf23 12 hours ago
that the labour theory of value is actually true will come as a shock to many
hunterpayne 5 hours ago
If that was the case, a newly discovered but untapped gold mine would be worthless.
tptacek 12 hours ago
Periodic reminder that Newsweek no longer exists. What you're reading is essentially an SEO play run by a religious cult that bought Newsweek's branding in a fire sale. A useful thing to do with any Newsweek story is to take a minute to look into the background of whoever the author of the story is.

Notably, this story is pitched as a "News Story", but it's not really that at all; it's an opinion piece with a couple of quotes from AI opponents. Frustratingly, not many people understand what "Newsweek" is today, so they're always going to be able to collect some quotes for whatever story they're running.

waon 8 hours ago
Periodic reminder that there are people seeking to derail discussions critical of AI and divert attention away from the actual substance of these issues.
scythe 12 hours ago
Is this still accurate? Wikipedia says that Newsweek was acquired by IBT Media (a front for a religious movement) in 2013 but returned to independent ownership under Dev Pragad and Jonathan Davis in 2018 following a criminal investigation into embezzlement. I was not able to confirm or reject any links still existing between Newsweek and its current owners and IBT Media.

It does appear that the new owners are very much leaning into a "new media" business model and the old journalistic staff is probably gone.

tptacek 12 hours ago
Dev Pragad was involved with the IBT ownership of Newsweek. The whole thing is a mess. Cards on the table, I'm throwing an elbow with the "religious cult" thing; the cult has not much to do with why you should be careful with Newsweek. Rather, it's that Newsweek as it exists today has nothing to do with Newsweek as people understand it. Whoever owns it, it's basically an actual clickbait farm now.
bgwalter 12 hours ago
It is from Oct. 3rd, otherwise the author could have integrated OpenAI's "code red" and the absence of successful training runs since 2024 as well.

The article accurately reflects opinions in YouTube comments and opinions of the population at large.

shevy-java 12 hours ago
The recent increase in hardware prices (the example I gave yesterday of the same RAM I purchased about 2 years ago for a cheap computer suddenly costing 2.5x as much as it did ~2 years ago) changed my opinion completely. I was already skeptical of AI, but I could see a few possible use cases, such as generating images for use in free-to-play browser games, and so forth. But I also saw a lot of crap - fake videos on YouTube that just waste my time. And now that the prices are going up, I have had enough.

The big tech bro AI mega-corporations need to pay us - aka mankind - for the damage they cause here. The AI bubble is already subsiding, we see that, despite Trump trying to protect the mafiosi here. They owe us billions in damages now. Microsoft also recently announced it will milk everyone by increasing prices due to "new AI features in MS Office". Granted, I don't use Microsoft products as such (I do have a computer running Win10 though, so my statement is not 100% correct; I just don't use a Microsoft paid-for office suite or any other milk-for-money service), but I think it is time to turn the tables.

These corporations should pay us for the damage they are causing here in general. I no longer accept the AI mafia method, even less so now that hardware prices have gone up because of this. This mafia owes us money.

ronsor 12 hours ago
Neither companies nor individuals owe you "damages" because they merely did something you don't like.
palmotea 11 hours ago
> Neither companies nor individuals owe you "damages" because they merely did something you don't like.

As a matter of present law, maybe not. Doesn't mean it has to stay that way.

And this thing isn't merely "[doing] something you don't like."

hunterpayne 5 hours ago
Your insipid post wasted some of all of our time. Does that mean you now owe all of us?
azemetre 10 hours ago
People need to realize that these companies only exist because of work that the public openly shared with society at no cost (transistors and the internet). All they want to do is extract as much wealth as possible before dying.

There should be regulations that tax big tech enough to pay out billions to support a public jobs program for open source development.

They're destroying the most precious thing in the known universe, our planet, to chase a fictional good.

It's insanity.

glengGlenR 10 hours ago
Look who is running corporations and benefiting from the stock market the most:

CEO that jacked up EpiPen prices? GenX

Insurance CEO that was shot? GenX

Musk and Thiel, Satya Nadella, Sundar? GenX

Senior leadership and C-suite types are predominantly GenX now. Boomer leadership is tied up in Wall Street.

Backlash is just re-iterating the need for Boomers and GenX to cede stewardship of politics and the economy.

Biology is self-selecting. People who won't be around to deal with the fallout of their choices have little reason to change course.

It's on the next generations to invert the obvious ageism-driven decision-making that prefers and over-values the walking dead.

Edit: Elon agrees it's best if older generations move on https://www.businessinsider.com/elon-musk-believes-it-is-imp...

throwaway743 12 hours ago
A lot of this AI backlash feels less about the tech itself and more about people feeling economically exposed. When you think your job or livelihood is on thin ice, it is easier to direct that fear at AI than at the fact that our elected reps have not offered any real plan for how workers are supposed to survive the transition.

AI becomes a stand-in for a bigger problem. We keep arguing about models and chatbots, but the real issue is that the economic safety net has not been updated in decades. Until that changes, people will keep treating AI as the thing to be angry at instead of the system that leaves them vulnerable.

jandrewrogers 12 hours ago
A major factor in the backlash is that the AI is obnoxiously intrusive because companies are forcefully injecting it into everything. It pops up everywhere trying to be "helpful" when it is neither needed nor helpful. People often experience AI as an idiot constantly jabbering next to them while they are trying to get work done.

AI would be much more pleasant if it only showed up when summoned for a specific task.

donmcronald 12 hours ago
> A lot of this AI backlash feels less about the tech itself and more about people feeling economically exposed.

This is what it is for me. I can see the value in AI tech, but big tech has inserted themselves as unneeded middlemen in way too much of our lives. The cynic in me is convinced this is just another attempt at owning us.

That leaked memo from Zuckerberg about VR is a good example. He's looking at Google and Apple having near absolute control over their mobile users and wants to get an ecosystem like that for Facebook. There's nothing about building a good product or setting things up so users are in control. It's all about wanting to own an ecosystem with trapped users.

If they can, big tech will gate every interaction or transaction and I think they see AI as a way to do that at scale. Don't ask your neighbour how to change a tire on your car. Ask AI. And pay them for the "knowledge".

watwut 6 hours ago
What memo?
encyclopedism 12 hours ago
I've mentioned this elsewhere on HN yet it bears repeating:

The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do. Things that give them meaning, many of which are tied to earning money and producing value by doing just that thing. As someone said, "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".

Much of the meaning we humans derive from work is tied to the value it provides to society. One can do coding for fun but doing the same coding where it provides value to others/society is far more meaningful.

Presently some may say: AI is amazing, I am much more productive; AI is just a tool; AI empowers me. The irony is that this in itself shows the deficiency of AI. It demonstrates that AI is not yet powerful enough to NOT need to empower you, to NOT need to make you more productive. Ultimately AI aims to remove the need for a human intermediary altogether; that is the AI holy grail. Everything in between is just a stop along the way, so for those it empowers: stop and think a little about the long-term implications. It may be that right now you are in a comfortable position financially or socially, but the future you, in just a few short months, may be dramatically impacted.

I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder. The law secretary whose dream job is being automated away, a dream dreamt from a young age. The journalist whose value has been substituted by a white text box connected to an AI model.

the_snooze 12 hours ago
Eh, it's way simpler than that. AI doesn't know when to STFU. When I write an email or document, I don't need modern-day Clippy constantly guessing (and second-guessing) my thoughts. I don't need an AI sparkle button plastered everywhere to summarize articles for me. It's infantilizing and reeks of desperation. If AI is a truly useful tool, then I'll integrate it into my workflow on my own terms and my own timeline.
lawlessone 12 hours ago
Part of this is the behavior around it from some users too. Like that guy spamming FOSS projects on GitHub with 13k LOC of code nobody asked for, then just forwarding the criticism from the people forced to review it to Claude and copy-pasting the response back to them.

Triumphant posts on LinkedIn from former SEO/crypto-scam people telling everyone they'll be left behind if they don't adopt the latest flavor of text/image generator.

And all these resources being spent on huge data centres for text generators when things like protein folding would be far more useful; billion-dollar salaries for "AI gurus" who are just throwing sh*t at the wall and hoping their particular mix of models and training works, while laying people off.

watwut 12 hours ago
The constant stream of exaggerated bragging "hahaha we will fire and replace you all" from AI companies is not helping.

This tech cycle does not even pretend to be "likable guys". They are framing themselves as sociopaths due to, well, being interested only in millionaires' money.

It makes for bad optics.

gaigalas 12 hours ago
I think the anger towards AI is completely fabricated.

Where are the new luddites, really? I just don't see them. I see people talking about them, but they never actually show up.

My theory is that they don't actually exist. Their existence would legitimize AI, not bring it down, so AI people fantasize about this imaginary nemesis.

malfist 12 hours ago
I don't understand how you've read the comments on this page and still think that people who don't like AI are fictitious.
gaigalas 11 hours ago
The internet dislikes a lot of things. It's meaningless.

A substack post is not anger, an HN comment is not breaking machines.

In IT specifically, people who dislike AI are simply not revolting. They're retiring or taking sabbaticals. They're not breaking machinery, they're waiting for the thing to crash and burn on its own.

malfist 11 hours ago
That's a no true Scotsman mixed with a strawman.

Anger is a lot of things besides intentional sabotage and insurrection

gaigalas 11 hours ago
Do you believe people actually hate fidget spinners, or it was just internet talk?

Show me manifestations of anger towards AI that actually happened outside of the internet. Some massive strike, some protest, something meaningful.

tim333 10 hours ago
gaigalas 10 hours ago
Is Paul McCartney an IT worker?
tim333 9 hours ago
No but it's anger that happened outside of the internet.
gaigalas 9 hours ago
Fair enough. I should have been more specific.

You do understand that the Luddite movement was a low class, mass worker movement, don't you?

It seems out of place to mention a big name artist as a Luddite, as if you don't understand what the word implies.

MSFT_Edging 12 hours ago
Data centers are better guarded than some government institutions. New luddites can't exactly go in smashing the servers.

The actual "new luddites" have been screaming on here for years complaining about losing their careers over immature tech for the sake of reducing labor costs.

gaigalas 11 hours ago
Where are the organized strikes? The protests in front of tech companies?

It simply doesn't exist. Tech workers who dislike AI are more indifferent than angry.

tim333 10 hours ago
gaigalas 10 hours ago
Fair enough.

I was talking mainly about tech workers though (this website's target audience), and I didn't make that distinction in the comment you replied to, but I did make it further down the thread well before you replied.

b0rbb 9 hours ago
I think treating Tech Workers as a whole as synonymous with this website's target audience isn't a correct assumption.

There are some social circles I frequent that are made up of folks anyone here would qualify as a "Tech Worker" who - and I mean this without any exaggeration - abhor the community of commenters at HN. And I don't mean just folks outside of SV or other major tech hubs. There are people who very much believe the commenters here are the worst people in the industry.

And just to be clear, I'm not of that belief, but it's worth pointing out that the population of Tech Workers on HN isn't going to be indicative of Tech Workers as a whole.

Going back to the previous topic however; Those same people I'm referring to often have a complete overlap with those that are burned out by AI in any form (usage, discussion around it, being advertised to, being forced to use it).

And to some of their concerns, I genuinely empathize with them. That's probably best gone into via something like a blog post or anything else that lends itself to long form writing.

gaigalas 9 hours ago
> the population of Tech Workers on HN isn't going to be indicative of Tech Workers as a whole

That's why I'm asking for a real-world example outside of the internet. It's all weird bubbles here.

Your comment actually strengthens my critique.

tim333 7 hours ago
>...without any exaggeration - abhor the community of commenters at HN

How come? We seem a mostly harmless lot?

robocat 9 hours ago
When will the mimes strike...
Ekaros 10 hours ago
Maybe we have just moved from anger to apathy in general.
gaigalas 10 hours ago
Really, where was the anger? When did it happen?

I see trash talk. We trash-talk microwaves, for example, and that doesn't mean we hate them.

I do believe this supposed anger is fabricated. Not conspiracy-style fabricated, just fabricated.

It's just attractive for a technology still full of problems to have an enemy. You can blame stuff on this imaginary enemy. You tell yourself that the guy trash-talking your new toy is not right, he's just angry because he's going to lose his job soon, and you sleep better.

I also believe some people buy into the existence of that enemy. The idea that people are anxious about losing jobs was repeated ad nauseam, so it stuck. But there is no real-world evidence of this anxiety at the levels attributed to it.

jaredcwhite 10 hours ago
You are not a serious person.
gaigalas 10 hours ago
Can you elaborate?
josefritzishere 10 hours ago
AI slop emulates humans. It's not alive. It can't think. This is what stochastic parrots sound like.
robocat 9 hours ago
I tried to get a sound effects generator to find out what "stochastic parrots" might sound like...
echelon 12 hours ago
We're too early.

This is AI's "dialup era" (pre-56k, maybe even the 2400 baud era).

We've got a bunch of models, but they don't fit into many products.

Companies and leadership were told to "adopt AI" and given crude tools with no instructions. Of course it failed.

Chat is an interesting UX, but it's primitive. We need better ways to connect domains, especially multi-dimensional ones.

Most products are "bolting on" AI. There are few products that really "get it". Adobe is one of the only companies I've seen with actually compelling AI + interface results, and even their experiments are just early demos [1-4]. (I've built open source versions of most of these.)

We're in for another 5 years of figuring this out. And we don't need monolithic AI models via APIs. We need access to the AI building blocks and subnetworks so we can adapt and fine-tune models to the actual control surfaces. That's when the real takeoff will happen.
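
To make that concrete, here's roughly the kind of "adapt the building blocks" workflow I mean; a minimal sketch that attaches a LoRA adapter to a small open model (the model name and hyperparameters are placeholders, not a recommendation):

  # Sketch: attach small trainable LoRA adapters to a frozen base model,
  # instead of consuming a monolithic model through an API.
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from peft import LoraConfig, get_peft_model

  base = AutoModelForCausalLM.from_pretrained("gpt2")       # stand-in base model
  tokenizer = AutoTokenizer.from_pretrained("gpt2")

  config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["c_attn"])            # GPT-2's attention projection
  model = get_peft_model(base, config)                      # only adapter weights will train
  model.print_trainable_parameters()                        # a tiny fraction of the base model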

[1] Relighting scenes: https://youtu.be/YqAAFX1XXY8?si=DG6ODYZXInb0Ckvc&t=211

[2] Image -> 3D editing: https://youtu.be/BLxFn_BFB5c?si=GJg12gU5gFU9ZpVc&t=185 (payoff is at 3:54)

[3] Image -> Gaussian -> Gaussian editing: https://youtu.be/z3lHAahgpRk?si=XwSouqEJUFhC44TP&t=285

[4] 3D -> image with semantic tags: https://youtu.be/z275i_6jDPc?si=2HaatjXOEk3lHeW-&t=443

edit: curious why I'm getting the flood of downvotes for saying we're too early. Care to offer a counterargument I can consider?

wavemode 12 hours ago
This is AI's Segway era. Perfectly functional device, but the early-2000s notion that it was going to become the primary mode of transportation was just an investor-fueled pipe dream.
ljlolel 12 hours ago
Just add a stick and sharing: the scooters are quite successful
wavemode 12 hours ago
I never said they weren't successful.
echelon 12 hours ago
AI is going to be bigger than Segway / personal mobility.

I think dialup is the appropriate analogy because the world was building WebVan-type companies before the technology was sufficiently widespread to support the economics.

In this case, the technology is too concentrated and there aren't enough ways to adapt models to problems. The models are too big, too slow, not granular enough, etc. They aren't built on a per-problem-domain basis, but rather as "one-size-fits-all" models.

AndrewKemendo 12 hours ago
It's not a backlash against the AI; it's a backlash against the society that is utilizing it.

Go and read all of the anti-AI articles and they will eventually boil down to something to the effect of:

“the problems we have are more foundational and fundamental and AI looks like a distraction”

However this is a directionless complaint that falls under the “complaining about technology“ trope

As a result there is no real coherent conversation about what AI is, how we define it, what people are actually complaining about, or what we are rallying against, because people are overwhelmingly utilizing it in every part of life.

Capricorn2481 11 hours ago
Disagree. The people meaningfully controlling the direction of AI are the foundational problem. They are funding the politicians that control the system leaving society vulnerable. And we are well past the point of getting them out of power.

Our ability to effect change on them is about numbers. The backlash to how AI is forced on us is an easy rallying point because it's a widely experienced negative. If you own a computer or phone, chances are good AI has annoyed you at some point.

People hated Clippy. I don't think it would've been helpful to say "It's not Clippy you're mad at, but the societal foundation that enabled Clippy." That's not a good slogan.

AndrewKemendo 11 hours ago
So you’re agreeing? You just want the slogan version?

You’re doing exactly what I’m saying, which is being mad at the social structure.

inglor_cz 12 hours ago
On one side, people are unhappy about AI; on the other, how many of those same people will stop using ChatGPT to write their work e-mails and assignments for them?

It looks like the "car problem" in yet another form. Many people will agree that our cities have become too car-centric and that cars take up way too much public space, but few will give up their own personal car.

lisper 12 hours ago
> how many of those same people will stop using ChatGPT to write their work e-mails and assignments for them

Me. I never use AI to write content that I put my name to. I use AI in the same way that I use a search engine. In fact, that is pretty much what AI is -- a search engine on steroids.

inglor_cz 12 hours ago
Good. I can believe that a few people are principled enough, but principled people tend to be in a minority, regardless of the topic.

I am also a bit afraid of a future where the workload will be adjusted to heavy AI use, to the degree that a human working with his own head won't be able to satisfy the demands.

This happened around the 'car problem' too: how many jobs are in a walkable / bikeable distance now vs. 1925?

lisper 12 hours ago
I don't think AI is comparable to cars. The problem with cars is that they necessarily use the commons. The more roads you build, the less space you have for trains, parks, housing, etc. AI isn't like that. I can continue to think for myself and look for ways to add value as a human even if everyone around me is using AI. And if that fails, if I can't find a way to compete with AI, if AI really is capable of doing everything that I can do as well as I can do it, why would I not want to use it?
atmavatar 11 hours ago
> AI isn't like that.

Tell that to anyone who was hoping to upgrade their RAM or build a new system in the near future.

Tell that to anyone who's seen a noticeable spike in electricity prices.

Tell that to anyone who's seen their company employ layoffs and/or hiring freezes because management is convinced AI can replace a significant portion of their staff.

AI, like any new technology, is going to cost resources and cause growing pains during its adoption. The important question, which we'll only really know the answer to years or decades from now, is whether it is a net positive.

ronsor 12 hours ago
> how many jobs are in a walkable / bikeable distance now vs. 1925?

Probably the same amount. The only difference is that people are willing to commute farther for a job than someone would've in 1925.

inglor_cz 12 hours ago
Nope, we have a lot more sprawl. Look at the old maps of cities and compare them to the current ones.

In Ostrava, where I live, worker's colonies were located right next to the factories or mines, within walking distance, precisely to facilitate easy access. It came with a lot of other problems (pollution), but "commute" wasn't really a thing. Even streetcars were fairly expensive, and most people would think twice before paying the fare twice a day.

Nowadays, there are still industrial zones around, but they tend to be located 5-10 km from the residential areas, far too far to walk.

Even leaving industry aside, how many kids do you know who walk to school because it is within walking distance for them?

everdrive 12 hours ago
I never use AI to write an email, and if I ever found out a coworker was using AI to send emails to me I would never read those emails. It would be a tacit admission that the coworker in question did not have anything worth actually reading.
QuercusMax 12 hours ago
I started at a new job a few months back, and I got an obviously AI-written reply to my manager's "welcome" email from some contractor-type person who got CC'd on it. Fortunately I don't have to interact with the bozo in question, but it was really off-putting.
inglor_cz 12 hours ago
I am a fairly prolific writer, having published ten books since 2018 and averaging some three articles per week, all of that next to my programming work.

But I understood quite early that I am a fluke of nature and many other people, including smart ones, really struggle when putting their words on paper or into Word/LibreWriter. A cardiologist who saved my wife's life is one of them. He is a great surgeon, but calling his writing mediocre would be charitable.

Such people will resort to AI if only to save time and energy.

everdrive 12 hours ago
I want to hear their real words. People don't need to be perfect writers. I just want to know what they really think.
NewsaHackO 12 hours ago
The issue is that people say this, but still negatively judge people for making grammar/spelling mistakes. So, the practice will continue.
everdrive 12 hours ago
That might drift in the future. I've actually found myself leaving small errors in sometimes since it suggests that I actually wrote it. I don't use literal em-dashes -- but I often use the manual version and have been doing so much longer than mainstream LLMs have been around. I also use a lot of bulleted lists -- both of which imply LLM usage. I take my writing seriously, even when it's just an internet comment. The idea that people might think I wrote with an LLM would be insulting.

But further and to the point, spelling / grammar errors might be a boutique sign of authenticity, much like fake "hand-made" goods with intentional errors or aging added in the factory.

MSFT_Edging 11 hours ago
We've had spell check for decades, automatic grammar checking for at least a decade in most word processors.

None of this needs generative AI to pad out a half-baked idea.

NewsaHackO 8 hours ago
Unless you are using a proprietary, dedicated grammar checker, auto grammar check is far from perfect and will miss some subject-verb agreement errors, incorrect use of idioms, or choppy flow. Particularly in professional environments where you are being evaluated, this can tank an otherwise solid piece of written work. Even online in HN comments, people will poke fun at grammar, and (while I don't have objective evidence for this) I have noticed that posts with poor grammar or misspellings tend to have less engagement overall. In a perfect world, this wouldn't matter, but it's a huge driving factor for why people use LLMs to touch up their writing.
cess11 12 hours ago
I like your optimism.
inglor_cz 12 hours ago
If someone can express themselves well enough that their ideas are still clear, they are a competent writer.

Bad writing starts in the "wtf was that meant to say" territory, which can cause unnecessary conflicts or prolong an otherwise routine communication.

I don't like people using AI to communicate with other people either, but I understand where they come from.

everdrive 12 hours ago
And the LLM can take in total garbage and understand the intent of the writer? I know when I'm vague with an LLM I get junk or inappropriate output.
inglor_cz 12 hours ago
As an optimist I would say that it could be better at teasing out your intent from you in an interactive way, then producing something along those lines. People aren't ashamed to answer questions from AI.
edu 12 hours ago
I actually think that writing emails is a great use case for AI: starting from a draft or list of what you want to say and getting it polished to a professional tone. You need to prompt it correctly, review and iterate so it doesn’t become slop, but it's very useful.

OTOH, I’d never use it to write emails to friends and family, but then I don’t need to sound professional.

bryanlarsen 12 hours ago
I think you have a good point, but are getting a lot of pushback because of your example. Most AI-hostile people won't use ChatGPT directly but are still happy to use a lot of modern AI features/products such as speech-to-text, recommendation engines, translation services, et cetera.
inglor_cz 12 hours ago
This is a good correction, thank you.
graublau 12 hours ago
These new urban systems are simply a way to cram as many people into small boxes as possible and make citizens culturally flex with their bicycle life rather than just seem like poor peasants. Few give up their personal car because of decades of entrainment. I just think North America is always going to come out with the most selfish (for better or worse) system.

It can be clean tech but we need it to be personal or else we feel like we are declining in standard of living. They don't struggle with these issues in Europe or Asia because Europe and Asia are fundamentally different societies. I don't really see any other way around this dilemma.

inglor_cz 12 hours ago
"They don't struggle with these issues in Europe or Asia because Europe and Asia are fundamentally different societies."

Which ones? I live in Europe and selfishness + feelings of decadence/downfall/"the good days are over forever" are absolutely rampant here.

red-iron-pine 10 hours ago
> but few will give up their own personal car.

blaming the individual instead of the system is a sign of shillbottery

i'd give up my car tomorrow if we had better rapid transit in these parts. And they're working on it, but it takes billions and decades. And I need to drop my kids off at school tomorrow...

eesmith 12 hours ago
Isn't that observation akin to the "We should improve society somewhat" ... "Yet you participate in society! Curious!" meme?

We know from Paris that systemic change is required - it isn't simply individual choice.

inglor_cz 12 hours ago
OK, what sort of systemic change do you propose? Note that bans on anything digital are really hard to enforce without giving law enforcement draconian powers.
eesmith 10 hours ago
Yes, that's the meme. "We should improve society somewhat" doesn't mean the peasant has actionable proposals, only pointing out there's a problem.

My comment was instead highlighting how your analogy to the "car problem" might be right, in that where we see big shifts to reduce the car problem, like in Paris, it comes from systemic changes from a car-centric form to a diverse transit form, rather than an individual choice model.

My go-to these days is to heavily tax the rich, place a staggering tax on the superrich, introduce meaningful UBI, put strict controls on housing rentals, etc.

Why waste time using ChatGPT to write work email slop when you don't need to work?

I presume the student is using ChatGPT for assignments in order to get the credentials (a degree) needed for a job - while companies off-load their training costs onto young people, who are then encouraged to go into debt, resulting in a mild form of debt bondage.

Reduce the need for a job, so the students who go to college are more likely to be those who want the personal education, rather than credentialism.

But hey, I'm just a peasant programmer saying there are flaws, and we should do something about it. Talk to an actual expert, not me.

Those experts (I hear them on podcasts) will also say things like having strong consumer protection laws so people aren't forced to deal with AI (and human!) sludge.

agentultra 12 hours ago
That’s largely because the built environment is designed for cars and there are no sufficient alternatives.

When you design the built environment for humans, people drive less and own fewer personal vehicles.

lisper 12 hours ago
It's much worse than "designed for cars." It's more like "not survivable without a car." It's the same with apps on my phone. I don't want to use them, but sometimes there simply is no alternative in today's world.

We may end up building a world where AI is similarly necessary. The AI companies would certainly like that. But at the moment we still have a choice. The more people exercise their agency now the more likely we are to retain that agency in the future.

gdulli 12 hours ago
The comment explicitly mentioned "cities". Of course rural and suburban areas don't make it practical to be without a car, but many people in cities could use public transportation yet handwave it as beneath them, or dangerous, or unreliable, when in reality it works just fine. Car travel has its own tradeoffs that can be just as easily exaggerated.
inglor_cz 12 hours ago
I lived in Prague, whose center is medieval and the neighbourhoods around it pre-1900, and even though what you say is true (fewer people drove everywhere), the streets were still saturated to their capacity.

It seemed to me that regardless of the city, many people will drive until the point where traffic jams and parking become a nightmare, and only then consider the alternatives. This point of pain is much lower in old European cities that weren't built as car-centric and much higher in the US, but the pattern seems to repeat itself.

ljlolel 12 hours ago
Helsinki made a major push to reduce cars to get to Vision Zero and succeeded, with no car fatalities in 2024. It’s now hard to get a taxi and you’re expected to walk or use other transport; it’s a little bit annoying but worth it.
dentemple 12 hours ago
> On one side, people are unhappy about AI; on the other, how many of those same people will stop using ChatGPT to write their work e-mails and assignments for them?

As Newsweek points out*, the people most unhappy about AI are the ones who CAN'T use ChatGPT to write their work e-mails and assignments because they NO LONGER have access to those jobs. There are many of us who believe that the backlash against AI would never have gotten so strong if it hadn't come at the expense of the creators, the engineers, and the unskilled laborers first.

AI agents are the new scabs, and the people haven't been fooled into believing that AI will be an improvement in their lives.

---

*and goes deeper on in this article: https://www.newsweek.com/clanker-ai-slur-customer-service-jo...

inglor_cz 12 hours ago
Whenever you send an e-mail, you are a scab to a postman who once delivered paper letters. Does that make you feel bad?
1vuio0pswjnm7 12 hours ago
  GET /x?fnid=80IzSA4TbptZtx0vqRTZsv&fnop=commconfirm HTTP/1.1
  Host: news.ycombinator.com
  connection: close
  Connection: close
zkmon 12 hours ago
>> There is a lack of deep value

There is no such thing as deep value. The stock market rises on speculation about other people's buying patterns, not on company fundamentals.

Where are the deep values? Politics? Media? Academia? Human relations? Business? What do you mean by deep values? We can't even look more than one year ahead.

Modern human behavior is highly optimized to bother only about immediate goals. The other day, I was reviewing a software architecture and asked the architect who the audience/consumer for the document was. She said it was the reviewers. I asked again, hoping to identify the downstream process that uses the document, and got the same answer, a bit more sternly this time.

cess11 12 hours ago
Perhaps they mean that it does not satisfy some deep need, perhaps as opposed to a shallow want or desire.