US citizens (and those of the rest of the Western world) face real problems: youth employment, social and political instability, unaffordable housing, internet addiction (yes, I believe it is a real problem that people spend 5 hours on their phones daily) and social atomisation. Meanwhile, all resources are being rushed into building technology that does not fundamentally improve people's well-being. Advanced societies already had pretty good capabilities for writing, design, coding, searching for information, etc. Now we are pouring all available resources, at any cost, into automating these processes even further. The costs of this operation are tremendous, and it doesn't yield any results that improve everyday lives.
In 2020 there were a ton of UI/UX/graphics companies that could produce copious amounts of visual content for society while providing work to many people. Now we are about to automate this process and be able to generate an infinite amount of graphics on demand. To what end? Was our capability to create graphics any kind of bottleneck before? I don't think so.
The stock market and tech leadership are completely decoupled from the problems that the majority of people face. The real effect of AI at hand is to commoditise intellectual work that previously functioned well dispersed throughout society. This does not benefit the majority of people.
I have yet to see any LLM that appears to do that. They all seem to let me have a conversation with data; i.e., a better Google search, or a quick way to make clip art.
I never thought a computer would pass the Turing test in our lifetime (my bot did by accident sometimes, which was always amusing). I spoke to an AI professor who's been at this since the 80s, and he never thought a computer would pass the Turing test in our lifetime either. For it to happen and the reaction to be anything short of thunderous applause betrays a society bankrupt of imagination, forward thinking, and wonder.
We let pearl-clutching loom smashers hijack the narrative to the point where a computer making a drawing based on natural language is "slop" and you're a bad person if you like it, instead of it being the coolest thing in the fucking world, which it actually is. We have chatbots that can do extraordinary feats of research and pattern-matching, but all we can do is cluck over some idiot giving himself bromide poisoning. The future is here and it's absolutely amazing, and I'm tired of pretending it isn't. I can't wait for this "AI users DNI", "This video proudly made less efficiently than it could have been because I'm afraid of AI" social zeitgeist to die off.
Some people think an M-16 is the coolest thing in the world. Nobody thinks we should be handing them out to schoolchildren. The reaction is because most people think AI will compound our current problems. Look at video generation. Not only does it put a lot of people out of work, it also breaks people's ability to post a video as proof of something. Now we have to try to determine whether a very real-looking video is from life or from a neural net. That is very dangerous, and the tech firms released it without any real thought or discussion about the effect it would have. They make illegal arms dealers look thoughtful by comparison. Ignoring this (and other effects) is just childish.
> It used to rightfully be something we looked forward to
This is rather unimportant, but I would say that media has usually portrayed AI as a dangerous thing. 2001: A Space Odyssey, Terminator, Mass Effect, Her, Alien, The Matrix, Ex Machina, you name it.
Science fiction has always been mixed. In Star Trek, the cool technology and AGI-like computer are accompanied by a post-scarcity society where fundamental needs are taken care of. There are countless other stories where technology and AI are used as tools to enrich some at the expense of others.
>We let pearl-clutching loom smashers hijack the narrative to the point where a computer making a drawing based on natural language is "slop" and you're a bad person if you like it
I don't strongly hold one opinion or the other, but I think the fundamental root of people's backlash is that this is something that jeopardizes their livelihood. Not in some abstract "now the beauty and humanity of art is lost" sort of way, but much more concretely: because of LLM adoption (or at least hype), they are out of a job and cannot make money, which hurts their quality of life far more than access to LLMs improves it. Then those people see the "easy money" pouring into this bubble, and it would be hard not to get demoralized. You can claim that people just need to find a different job, but that ignores the reality that over the past century the skill floor has steadily risen and the ladder has been pulled up; and perhaps worse, reaching for that higher bar still leaves one "treading water" without any commensurate growth in earnings.
The Star Trek computer doesn't even attempt to show AGI, and Commander Data is the exception, not the rule. Star Trek has largely been anti-AGI for its entire run, for a variety of reasons - dehumanization, unsafe/going unstable, etc.
Unlike modern LLMs, it also correctly handles uncertainty, stating when there is insufficient information. However, the writers seem to have made a deliberate effort to restrict the extent of its use for planning and command (no "long-running agentic tasks", in modern parlance), requiring human input and intervention in the loop. This is likely because, as you mentioned, there is a theme of "losing humanity when you entrust too much to the machine".
“let” nothing.
There is pushback, and not being able to channel it effectively doesn't invalidate it.
As a concrete example: Here on HN, there are always debates on what the hell people mean when they say LLMs helped them code.
I’ve seen it happen enough that I now have a boilerplate request for posters: share your level of seniority, experience, domain familiarity, language familiarity, and project result, alongside how the LLM helped.
I am a nerd through and through, and still read copious amounts of science fiction on a weekly basis. I lack no wonder and love for tech.
To make that future, the jagged edges of AI output need to be mapped and tamed. That needs these kinds of precise conversations so that we have a shared reality to work on.
Doing that job badly is the root cause of people talking past each other. Dismissing it as doomerism is essentially to miss market and customer feedback.
Wake me up when we have the FSM equivalent for AI. What we have now is a whole lot of corporate wank.
Unless they are being ironic, using an AI accent with a statement like that for an article talking about the backlash to lazy AI use is an interesting choice.
It could have been human-written (I have noticed that people who use them all the time start to talk like them), but the "it's not just X — it's Y" format is the hallmark of mediocre articles being written or edited by AI.
Not sure who you talk to, but the 'It's Not Just X, It's Y' format doesn't show up in everyday speech (caveat, in my experience).
It just sounds like slop, as it's everywhere now. The pattern invites questions about the authenticity of the writer and whether they've fallen victim to AI hallucinations and sycophancy. I can quickly become offended when someone asks me to read their ChatGPT output without disclosing that it was GPT output.
Now when AI learns how to use parallelism I will be forced to learn a new style of writing to maintain credibility with the reader /s
This AI speech pattern is not just an em dash—it's a trite and tonally awkward pairing of statements following the phrase "not just".
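For fun: the tell is mechanical enough that a naive heuristic catches a lot of it. A minimal sketch in Python (the regex and the sample sentence are mine and purely illustrative; actual AI-text detection is far harder than this):

    import re

    # Naive flag for the "not just X, it's Y" construction discussed above.
    # Matches "not just <short phrase>" followed by a comma/dash and "it's"/"but".
    PATTERN = re.compile(
        r"\bnot\s+just\s+[^.,;\u2014-]{1,60}[,;\u2014-]+\s*(?:it'?s|but)\b",
        re.IGNORECASE,
    )

    def flag_not_just(text):
        # Return the matched spans so a human can judge them in context.
        return [m.group(0) for m in PATTERN.finditer(text)]

    print(flag_not_just("It's not just an em dash, it's a whole aesthetic."))
    # ["not just an em dash, it's"]

A single match proves nothing, of course; it's the density of the pattern that reads as synthetic.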
I also have a tougher time judging the reliability of others, because you can now get grammatically perfect, well-organized emails from people who are incompetent. AI has significantly decreased the signal-to-noise ratio for me.
Ten Ways To Tell AI Listicles From Human Ones—You Won't Believe Number Seven

Fact is, AI writing is just bad. It checks all the elementary-school writing boxes, but fails in the sense that it is bad, overly verbose, and subtly but meaningfully incorrect. People see that, can't put the issue into words, and then look for other signs.
Yes, AI is bad in the way that someone who has learned some rules about writing produces bad texts. And when a human writes the same way, it is still bad.
"The irony, of course, is that many of the people most convinced of the em dash’s inhumanity are least equipped to spot actual AI writing"
https://medium.com/microsoft-design/the-em-dash-conspiracy-h...
The em dash was never about attention to detail or effort. It is a way to construct a sentence when you don't know how.
I naturally wrote "it's not just X, it's Y" long before November 2022 ChatGPT. Probably because I picked up on it from many people.
It's a common rhetorical template of a parallel form where the "X" is re-stating the obvious surface-level thing and then adding the "Y" that's not as obvious.
E.g., examples of regular people using that rhetorical device on HN for 15+ years, outside the context of advertising gadgets:
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
So AI-slop writes like that because a lot of us humans wrote like that and it copies the style. Today is the first time I've learned that the "It's not X, it's Y" really irritates many readers. Personally, I've always found it helpful when it reveals a "Y" that's non-obvious.
2) Most of those, while they had the two statements, did not place the statements in succession.
There are maybe four unique examples in that search over the past 15 years, which is why the explosion of the pattern we see today is very telling, and most likely due to LLMs.
What does this statement even mean?
It was always there, and used. It was just typically restricted to pretty formal, polished writing (I should know, I have coworkers who fuss over em and en spaces). I bet if you looked, you'd find regular use of em-dashes in Newsweek articles, going back decades.
The thing LLMs did was inject it into unsophisticated writing. It's really only an LLM tell if it's overused or used in an unexpected context (e.g., an 8th-grader's essay, an email message).
I tend to insert a space before and after on the very rare occasion I might use one... However, I'm from the colonies, and I've just learnt my preference is likely due to British influence.
This is fine for topics that don’t need to be exciting, like back office automation, data analysis, programming etc. but leads me to believe most content made for human consumption will still need to be human generated.
I’ve ceased using AI for writing assistance beyond spell check, accuracy checks, and as an automated reviewer. The core prose has to be human-written to not sound like slop.
https://en.wikipedia.org/wiki/Antithesis
"AI" surely overuses it but this article didn't seem suspect to me. I agree that "AI" speak rubs off on heavy users though.
The tech isn’t going away, but a hard reset is overdue to bring things back down for a cold, hard reality check. Yesterday's article about MSFT slashing quotas on AI sales because customers aren't buying is in line with this broader theme.
Morgan Stanley also quietly trying to offload its exposure to data center financing in a move that smells very summer of 2008-ish. CNBC now talks about the AI bubble multiple times a day. OpenAI looks incredibly vulnerable and financially over-extended.
I don’t want a hard bubble pop such that it nukes the tech ecosystem, but we’re reaching a breaking point.
I think your wording is correct, not "AI fatigue", because I don't want to go back to the pre-AI era, yet I can't stand another "OMG it's over" tweet either.
I won't believe any of the claims until I see them working (flawlessly).
Some days I wonder if we'd be better off or worse off if we had a complete collapse of technology. I think it'd be painful with a massive drop in standard of living, but we could still recover. I wonder if the same will be true in a couple more generations.
I think it's dangerous to treat younger generations like replaceable cogs. What happens when there's no one around that knows how the cogs are supposed to fit together?
Now Microsoft pushing "Copilot" is the complete opposite. It's so badly integrated with any standard workflow, it's disruptive in the worst of ways.
I think LLMs are incredible, and there are a lot of really good use cases where they can help surface recommendations and actions for a user to take. I don't think every user wants every app they touch turned into a chatbot, though.
Keep your eyes on the skies: I forecast executives in golden parachutes in the near future.
I don’t see any big AI company having a successful IPO anytime soon which is going to leave some folks stuck holding the financial equivalent of nuclear waste.
Then they extracted our privacy and sold it to advertisers.
Now with AI they're extracting our souls. Who do they expect to sell them to?
Expect? To work those souls to build the Dyson swarms.
Well, some of them, anyway. Zuckerberg clearly uses the word "superintelligence" as a buzzword; he doesn't buy that vision of how "super" it could be, given what he says his users will do with it.
Am I stupid or is this a stupid line that proves the antithesis of what they want? It went from 4 in 5 being negative to less than half?
What even is journalism now.
Notably, this story is pitched as a "News Story", but it's not really that at all; it's an opinion piece with a couple of quotes from AI opponents. Frustratingly, not many people understand what "Newsweek" is today, so they're always going to be able to collect some quotes for whatever story they're running.
It does appear that the new owners are very much leaning into a "new media" business model and the old journalistic staff is probably gone.
The article accurately reflects opinions in YouTube comments and opinions of the population at large.
The big tech bro AI mega-corporations need to pay us, aka mankind, for the damage they cause here. The AI bubble is already subsiding; we see that, despite Trump trying to protect the mafiosi here. They owe us billions in damages now. Microsoft also recently announced it will milk everyone by increasing prices due to "new AI features in MS Office". Granted, I don't use Microsoft products as such (I do have a computer running Win10, so my statement is not 100% correct; I just don't use a Microsoft paid-for office suite or any other milk-for-money service), but I think it is time to turn the tables.
These corporations should pay us, for the damage they are causing here in general. I no longer accept the AI mafia method, even less so as the prices of hardware went up because of this. This mafia owes us money.
As a matter of present law, maybe not. Doesn't mean it has to stay that way.
And this thing isn't merely "[doing] something you don't like."
There should be regulations that tax big tech enough to pay out billions to support a public jobs program for open-source development.
They're destroying the most precious thing in the known universe, our planet, to chase a fictional good.
It's insanity.
CEO that jacked up EpiPen prices? GenX
Insurance CEO that was shot? GenX
Musk and Thiel, Satya Nadella, Sundar? GenX
Senior leadership and C-suite types are predominantly GenX now. Boomer leadership is tied up in Wall Street.
Backlash is just re-iterating the need for Boomers and GenX to cede stewardship of politics and the economy.
Biology is self-selecting. People who won't be around to deal with the fallout of their choices have little reason to change course.
It's on the next generations to invert the obvious ageism-driven decision-making that prefers and over-values the walking dead.
Edit: Elon agrees it's best if older generations move on https://www.businessinsider.com/elon-musk-believes-it-is-imp...
AI becomes a stand-in for a bigger problem. We keep arguing about models and chatbots, but the real issue is that the economic safety net has not been updated in decades. Until that changes, people will keep treating AI as the thing to be angry at instead of the system that leaves them vulnerable.
AI would be much more pleasant if it only showed up when summoned for a specific task.
This is what it is for me. I can see the value in AI tech, but big tech has inserted themselves as unneeded middlemen in way too much of our lives. The cynic in me is convinced this is just another attempt at owning us.
That leaked memo from Zuckerberg about VR is a good example. He's looking at Google and Apple having near absolute control over their mobile users and wants to get an ecosystem like that for Facebook. There's nothing about building a good product or setting things up so users are in control. It's all about wanting to own an ecosystem with trapped users.
If they can, big tech will gate every interaction or transaction and I think they see AI as a way to do that at scale. Don't ask your neighbour how to change a tire on your car. Ask AI. And pay them for the "knowledge".
The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do. Things that give them meaning, many of which are tied to earning money and producing value for doing just that thing. As someone said: "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes."
Much of the meaning we humans derive from work is tied to the value it provides to society. One can do coding for fun but doing the same coding where it provides value to others/society is far more meaningful.
Presently some may say: AI is amazing, I am much more productive; AI is just a tool; AI empowers me. The irony is that this in itself shows the deficiency of AI. It demonstrates that AI is not yet powerful enough to do without you, to not need to empower you or make you more productive. Ultimately AI aims to remove the need for a human intermediary altogether; that is the AI holy grail. Everything in between is just a stop along the way, so if you are among those it empowers, stop and think a little about the long-term implications. Your position right now may be comfortable financially or socially, but your future self, in just a few short months, may be dramatically impacted.
I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder; the law secretary whose dream job, dreamt from a young age, is being automated away; the journalist whose value has been supplanted by a white text box connected to an AI model.
Triumphant posts on LinkedIn from former SEO/crypto-scam people telling everyone they'll be left behind if they don't adopt the latest flavor of text/image generator.
All these resources being spent, too, on huge data centres for text generators when things like protein folding would be far more useful; billion-dollar salaries for "AI gurus" who are just throwing sh*t at the wall and hoping their particular mix of models and training works, while laying people off.
This tech cycle does not even pretend to be "likable guys". They are framing themselves as sociopaths because, well, they are interested only in millionaires' money.
It makes for bad optics.
Where are the new luddites, really? I just don't see them. I see people talking about them, but they never actually show up.
My theory is that they don't actually exist. Their existence would legitimize AI, not bring it down, so AI people fantasize about this imaginary nemesis.
A substack post is not anger, an HN comment is not breaking machines.
In IT specifically, people who dislike AI are simply not revolting. They're retiring or taking sabbaticals. They're not breaking machinery, they're waiting for the thing to crash and burn on its own.
Anger is a lot of things besides intentional sabotage and insurrection.
Show me manifestations of anger towards AI that actually happened outside of the internet. Some massive strike, some protest, something meaningful.
You do understand that the Luddite movement was a working-class mass movement, don't you?
It seems out of place to mention a big name artist as a Luddite, as if you don't understand what the word implies.
The actual "new luddites" have been screaming on here for years complaining about losing their careers over immature tech for the sake of reducing labor costs.
It simply doesn't exist. Tech workers who dislike AI are more indifferent than angry.
I was talking mainly about tech workers, though (this website's target audience). I didn't make that distinction in the comment you replied to, but I did make it further down the thread, well before you replied.
There are some social circles I frequent, made up of folks anyone here would qualify as "tech workers", that - and I mean this without any exaggeration - abhor the community of commenters at HN. And I don't mean just folks outside of SV or other major tech hubs. There are people who very much believe the commenters here are the worst people in the industry.
And just to be clear, I'm not of that belief, but it's worth pointing out that the population of Tech Workers on HN isn't going to be indicative of Tech Workers as a whole.
Going back to the previous topic however; Those same people I'm referring to often have a complete overlap with those that are burned out by AI in any form (usage, discussion around it, being advertised to, being forced to use it).
And to some of their concerns, I genuinely empathize with them. That's probably best gone into via something like a blog post or anything else that lends itself to long form writing.
That's why I'm asking for a real-world example outside of the internet. It's all weird bubbles here.
Your comment actually strengthens my critique.
How come? We seem a mostly harmless lot?
I see trash talk. We trash talk microwaves, for example, and that doesn't mean we hate them.
I do believe this supposed anger is fabricated. Not conspiracy style fabricated, just fabricated.
It's just attractive for a technology still full of problems to have an enemy. You can blame stuff on this imaginary enemy. You tell yourself that the guy trash talking your new toy is not right, he's just angry because he's going to lose his job soon, and you sleep better.
I also believe some people buy the existence of that enemy. The idea that people are anxious about losing their jobs was repeated ad nauseam, so it stuck. But there is no real-world evidence of this anxiety at the levels attributed to it.
This is AI's "dialup era" (pre-56k, maybe even the 2400 baud era).
We've got a bunch of models, but they don't fit into many products.
Companies and leadership were told to "adopt AI" and given crude tools with no instructions. Of course it failed.
Chat is an interesting UX, but it's primitive. We need better ways to connect domains, especially multi-dimensional ones.
Most products are "bolting on" AI. There are few products that really "get it". Adobe is one of the only companies I've seen with actually compelling AI + interface results, and even their experiments are just early demos [1-4]. (I've built open source versions of most of these.)
We're in for another 5 years of figuring this out. And we don't need monolithic AI models via APIs. We need access to the AI building blocks and sub networks so we can adapt and fine tune models to the actual control surfaces. That's when the real take off will happen.
[1] Relighting scenes: https://youtu.be/YqAAFX1XXY8?si=DG6ODYZXInb0Ckvc&t=211
[2] Image -> 3D editing: https://youtu.be/BLxFn_BFB5c?si=GJg12gU5gFU9ZpVc&t=185 (payoff is at 3:54)
[3] Image -> Gaussian -> Gaussian editing: https://youtu.be/z3lHAahgpRk?si=XwSouqEJUFhC44TP&t=285
[4] 3D -> image with semantic tags: https://youtu.be/z275i_6jDPc?si=2HaatjXOEk3lHeW-&t=443
edit: curious why I'm getting the flood of downvotes for saying we're too early. Care to offer a counter argument I can consider?
I think dialup is the appropriate analogy because the world was building WebVan-type companies before the technology was sufficiently wide spread to support the economics.
In this case, the technology is too concentrated and there aren't enough ways to adapt models to problems. The models are too big, too slow, not granular enough, etc. They aren't built on a per-problem-domain basis, but rather as "one-size-fits-all" models.
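FWIW, a chunk of the "adapt the model to the problem" piece already exists in open-weights land via parameter-efficient fine-tuning, even though the big API models don't expose it. A minimal sketch with Hugging Face transformers + peft (the base model, target modules, and hyperparameters here are illustrative assumptions, not a recommendation):

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model
    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    # LoRA: attach small trainable adapter matrices to the attention
    # projections instead of retraining the whole network.
    config = LoraConfig(
        r=8,                        # adapter rank
        lora_alpha=16,              # adapter scaling
        target_modules=["c_attn"],  # GPT-2's fused attention projection
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # typically well under 1% of the base
    # ...then train `model` on domain data with any standard training loop.

The point stands, though: this only works where weights are open; the "building blocks and sub networks" of the frontier models stay behind an API.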
Go and read all of the anti-AI articles and they will eventually boil down to something to the effect of:
“the problems we have are more foundational and fundamental and AI looks like a distraction”
However this is a directionless complaint that falls under the “complaining about technology“ trope
As a result there is no real coherent conversation about what AI is, how we define it, what people are actually complaining about, or what we are rallying against, because people are overwhelmingly utilizing it in every part of life.
Our ability to effect change on them is about numbers. The backlash to how AI is forced on us is an easy rallying point because it's a widely experienced negative. If you own a computer or phone, chances are good AI has annoyed you at some point.
People hated Clippy. I don't think it would've been helpful to say "It's not Clippy you're mad at, but the societal foundation that enabled Clippy." That's not a good slogan.
You’re doing exactly what I’m saying, which is being mad at the social structure.
It looks like the "car problem" in yet another form. Many people will agree that our cities have become too car-centric and that cars take way too much public space, but few will give up their own personal car.
Me. I never use AI to write content that I put my name to. I use AI in the same way that I use a search engine. In fact, that is pretty much what AI is -- a search engine on steroids.
I am also a bit afraid of a future where the workload will be adjusted to heavy AI use, to the degree that a human working with his own head won't be able to satisfy the demands.
This happened around the 'car problem' too: how many jobs are in a walkable / bikeable distance now vs. 1925?
Tell that to anyone who was hoping to upgrade their RAM or build a new system in the near future.
Tell that to anyone who's seen a noticeable spike in electricity prices.
Tell that to anyone who's seen their company employ layoffs and/or hiring freezes because management is convinced AI can replace a significant portion of their staff.
AI, like any new technology, is going to cost resources and growing pains during its adoption. The important question which we'll only really know years or decades from now is whether it is a net positive.
Probably the same amount. The only difference is that people are willing to commute farther for a job than someone would've in 1925.
In Ostrava, where I live, workers' colonies were located right next to the factories or mines, within walking distance, precisely to facilitate easy access. That came with a lot of other problems (pollution), but "commute" wasn't really a thing. Even streetcars were fairly expensive, and most people would think twice before paying the fare twice a day.
Nowadays, there are still industrial zones around, but they tend to be located 5-10 km from the residential areas, far too far to walk.
Even leaving industry aside, how many kids do you know who walk to school because it is within walking distance?
But I understood quite early that I am a fluke of nature and many other people, including smart ones, really struggle when putting their words on paper or into Word/LibreWriter. A cardiologist who saved my wife's life is one of them. He is a great surgeon, but calling his writing mediocre would be charitable.
Such people will resort to AI if only to save time and energy.
But further and to the point, spelling / grammar errors might be a boutique sign of authenticity, much like fake "hand-made" goods with intentional errors or aging added in the factory.
None of this needs generative AI to pad out a half-baked idea.
Bad writing starts in the "wtf was that meant to say" territory, which can cause unnecessary conflicts or prolong an otherwise routine communication.
I don't like people using AI to communicate with other people either, but I understand where they come from.
OTOH, I’d never use it to write emails to friends and family, but then I don’t need to sound professional.
It can be clean tech but we need it to be personal or else we feel like we are declining in standard of living. They don't struggle with these issues in Europe or Asia because Europe and Asia are fundamentally different societies. I don't really see any other way around this dilemma.
Which ones? I live in Europe and selfishness + feelings of decadence/downfall/"the good days are over forever" are absolutely rampant here.
blaming the individual instead of the system is a sign of shillbottery
I'd give up my car tomorrow if we had better rapid transit in these parts. And they're working on it, but it takes billions and decades. And I need to drop my kids off at school tomorrow...
We know from Paris that systemic change is required - it isn't simply individual choice.
My comment was instead highlighting how your analogy to the "car problem" might be right, in that where we see big shifts to reduce the car problem, like in Paris, it comes from systemic changes from a car-centric form to a diverse transit form, rather than an individual choice model.
My go-to these days is to heavily tax the rich, place a staggering tax on the superrich, introduce meaningful UBI, put strict controls on housing rentals, etc.
Why waste time using ChatGPT to write work email slop when you don't need to work?
I presume the student is using ChatGPT for assignments in order to get the credentials (a degree) needed for a job - while companies off-load their training costs onto young people, who are then encouraged to go into debt, resulting in a mild form of debt bondage.
Reduce the need for a job, so the students who go to college are more likely to be those who want the personal education, rather than credentialism.
But hey, I'm just a peasant programmer saying there are flaws, and we should do something about it. Talk to an actual expert, not me.
Those experts (I hear them on podcasts) will also say things like having strong consumer protection laws so people aren't forced to deal with AI (and human!) sludge.
When you design the built environment for humans, people drive less and own fewer personal vehicles.
We may end up building a world where AI is similarly necessary. The AI companies would certainly like that. But at the moment we still have a choice. The more people exercise their agency now the more likely we are to retain that agency in the future.
It seemed to me that regardless of the city, many people will drive until the point where traffic jams and parking become a nightmare, and only then consider the alternatives. This point of pain is much lower in old European cities that weren't built as car-centric and much higher in the US, but the pattern seems to repeat itself.
As Newsweek points out*, the people most unhappy about AI are the ones who CAN'T use ChatGPT to write their work e-mails and assignments because they NO LONGER have access to those jobs. There are many of us who believe that the backlash against AI would never have gotten so strong if it hadn't come at the expense of the creators, the engineers, and the unskilled laborers first.
AI agents are the new scabs, and the people haven't been fooled into believing that AI will be an improvement in their lives.
---
*and goes deeper in this article: https://www.newsweek.com/clanker-ai-slur-customer-service-jo...
There is no such thing as deep value. The stock market rises on speculation about other people's buying patterns, not on company fundamentals.
Where are the deep values? Politics? Media? Academia? Human relations? Business? What do you mean by deep values? We can't even look beyond one year ahead.
Modern human behavior is highly optimized to bother only about immediate goals. The other day, I was reviewing a software architecture and asked the architect who the audience/consumer for this document is. She said it is the reviewers. I asked again, hoping to identify the downstream process that uses this document, and got the same answer, a bit sternly this time.