The article is certainly interesting as yet another indicator of the backlash against AI, but I must say, “exists to scam the elderly” is totally absurd. I get that this is satire, but satire has to have some basis in truth.
I say this as someone whose father was scammed out of a lot of money, so I'm certainly not numb to the potential consequences. The scams were enabled by the internet; does the internet exist for this purpose? Of course not.
And I think I'm inclined to agree. There are a small number of things that have gotten better due to AI (certain kinds of accessibility tech) and a huge pile of things that just suck now. The internet, by comparison, feels like a clear net positive to me, even with all the bad it enables.
This is something everyone needs to think about when discussing AI safety. Even ANI applications carry a lot of potential societal risks and they may not be immediately evident. I know with the information superhighway few expected it to turn into a dopamine drip feed for advertising dollars, yet here we are.
You'd think we would have learned this lesson when we failed to implement email charges that netted to $0 for balanced send/receive patterns, and thereby ushered in a couple of decades of spam, only eventually solved by centralization (Google).
Driving the cost of anything valuable to zero inevitably produces an infinite torrent of volume.
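A toy model of that point (all numbers are made up for illustration): a rational spammer keeps sending as long as the expected payoff per message exceeds the marginal cost of sending one, so volume is bounded only by working capital, and as the marginal cost approaches zero the profitable volume blows up.

```python
# Toy spam economics, hypothetical numbers throughout.
# Amounts are in micro-dollars (integers) to avoid float rounding.

def profitable_volume(payoff_udollars, hit_rate, cost_udollars, budget_udollars):
    """Messages a rational spammer sends before the budget runs out."""
    expected = payoff_udollars * hit_rate  # expected revenue per message sent
    if expected <= cost_udollars:
        return 0                    # sending loses money: no spam at all
    if cost_udollars == 0:
        return float("inf")         # free sending: unbounded volume
    return budget_udollars // cost_udollars  # capped only by working capital

PAYOFF = 100_000_000    # $100 per successful scam
HIT_RATE = 1e-5         # one victim per 100,000 messages
BUDGET = 1_000_000_000  # $1,000 of working capital

print(profitable_volume(PAYOFF, HIT_RATE, 10_000, BUDGET))  # 1-cent stamp: 0
print(profitable_volume(PAYOFF, HIT_RATE, 100, BUDGET))     # near-free: 10000000
print(profitable_volume(PAYOFF, HIT_RATE, 0, BUDGET))       # free: inf
```

A charge of even one cent per message makes this (hypothetical) scam unprofitable; at a near-zero cost the same budget funds ten million messages, and at exactly zero there is no cap at all.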
>It's the jobs and employment. Nobody's going to be able to work again. It's God AI is going to solve every problem. It's we shouldn't have open source for XYZ... https://youtu.be/k-xtmISBCNE?t=1436
and he says an "end of the world science fiction narrative" is hurtful.
Do you think that it isn't used for this? The satire part is to expand that use case to say it exists purely for that purpose.
Those poles WERE NOT invented for strippers/pole dancers. Ditto for the Hitachis. Even now, I'm pretty sure more firemen use the poles than strippers. But that doesn't stop the association from forming. That doesn't make me not feel a certain way if I see a stripper pole or a Hitachi Magic Wand in your living room.
(also: what city? for a friend...)
[1] https://thefactbase.com/the-vibrator-was-invented-in-1869-to...
[2] https://archive.nytimes.com/www.nytimes.com/books/first/m/ma...
What is it that isn't being done here, and who isn't doing it?
(note: I do not actually know if it explicitly prevents that. But because I am very cynical about corporations, I'd tend to assume it doesn't.)
If it's not happening yet, it will...
After you eliminate anything that requires accountability and trustworthiness from the tasks which LLMs may be responsibly used for, the most obvious remaining use-cases are those built around lying:
- advertising
- astroturfing
- other forms of botting
- scamming old people out of their money
I mean, I think you have not put much thought into your theory.
As a filmmaker, my friends and I are getting more and more done as well:
https://www.youtube.com/watch?v=tAAiiKteM-U
https://www.youtube.com/watch?v=oqoCWdOwr2U
As long as humans are driving, I see AI as an exoskeleton for productivity:
https://github.com/storytold/artcraft (this is what I'm making)
It's been tremendously useful for me, and I've never been so excited about the future. The 2010s and 2020s of cellphone incrementalism and the social media platformization of the web were depressing. These models and techniques are actually amazing, and you can apply them to so many problems.
I genuinely want robots. I want my internet to be filtered by an agent that works for me. I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.
Apart from all the other madness in the world, this is the one thing that has been a dream come true.
As long as these systems aren't owned by massive monopolies, we can disrupt the large companies of the world and make our own place. No more nepotism in Hollywood, no more working as a cog in the labyrinth of some SaaS company - you can make your own way.
There's financial capital and there's labor capital. AI is a force multiplier for labor capital.
While I certainly respect your interactivity and the resulting force-multiplier nature of AI, this doesn't mean you should try to emulate an already given piece of work. You'll certainly get a small dopamine hit when you successfully copy something, but it will also atrophy your critical skills and paralyze you from making any sort of original art. You'll miss out on discovering the feeling of frontier work that you can truly call your own.
Claims of productivity boosts must always be inspected very carefully: they are often merely perceived, and the reality may be the opposite (e.g. spending more time wrestling the tools), or creating unmaintainable debt, or making someone else spend extra time to review the PR and leave 50 comments.
There's no chatbot. You can use image-to-image, ControlNets, LoRAs, IPAdapters, inpainting, outpainting, workflows, and a lot of other techniques and tools to mold images as if they were clay.
I use a lot of 3D blocking with autoregressive editing models to essentially control for scene composition, pose, blocking, camera focal length, etc.
Here's a really old example of what that looks like (the models are a lot better at this now) :
https://www.youtube.com/watch?v=QYVgNNJP6Vc
There are lots of incredibly talented folks using Blender, Unreal Engine, Comfy, Touch Designer, and other tools to interface with models and play them like an orchestra - direct them like a film auteur.
But to be more in the spirit of your comment, if you've used these systems at all, you know how many constraints you bump into on an almost minute to minute basis. These are not magical systems and they have plenty of flaws.
Real creativity is connecting these weird, novel things together into something nobody's ever seen before. Working in new ways that are unproven and completely novel.
for a 2011 account that's a shockingly naive take
yes, AI is a labor capital multiplier. and the multiplicand is zero
hint: soon you'll be competing not with humans without AI, but with AIs using AIs
"OK, so I lost my job, but even adjusting for that, I can launch so many more unfinished side-projects per hour now!"
It's ostensibly doing things you asked it, but in terms dictated by its owner.
and it's even worse than that: you're literally training your replacement by using it when it re-transmits what you're accepting/discarding
and you're even paying them to replace you
True, but no more true than it is if you replace the antecedent with "people".
Saying that the tools make mistakes is correct. Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.
History is paved with people who got steamrollered by technology they didn't think would ever work. On a practical level AI seems very median in that sense. It's notable only because it's... kinda creepy, I guess.
Incorrect. People are capable of learning by observation, introspection, and reasoning. LLMs can only be trained by rote example.
Hallucinations are, in fact, an unavoidable property of the technology - something which is not true for people. [0]
Also, you don't know very many people, including yourself, if you think that confabulation and self-deception aren't integral parts of our core psychological makeup. LLMs work so well because they inherit not just our logical thinking patterns, but our faults and fallacies.
it's not a person, it doesn't hallucinate or have imagination
it's simply unreliable software, riddled with bugs
It is, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies by companies who have tried putting them in control loops! We have about 4 years of examples of bad things happening because the trigger was given to an LLM.
And we have tens of thousands of years of shared experience of "People Were Wrong and Fucked Shit Up". What's your point?
Again, my point isn't that LLMs are infallible; it's that they only need to be better than their competition, and their competition sucks.
But human systems that don't fuck shit up are short-lived, rare, and fragile, and they've only become a potential - not a reality - in the last century or so.
The rest of history is mostly just endless horrors, with occasional tentative moments of useful insight.
Sure, the AI isn't directly doing the scamming, but it's supercharging the ability to do so. You're making a "guns don't kill people, people do" argument here.
This is the knife-food vs knife-stab vs gun argument. Just because you can cook with a hammer doesn't make it its purpose.
If you survey all the people who own a hammer and ask what they use it for, cooking is not going to make the list of top 10 activities.
If you look around at what LLMs are being used for, the largest spaces where they have been successfully deployed are astroturfing, scamming, and helping people break from reality by sycophantically echoing their users and encouraging psychosis.
Email, measured by the number of emails people attempt to send, is owned by spammers 10- to 100-fold over legitimate email. You typically don't see this because of a massive effort by any number of companies to ensure that spam dies before it shows up in your mailbox.
To go back one step farther, porn was one of the first successful businesses on the internet, which was more than enough motivation for our more conservative members of Congress to want to ban the internet in the first place.
Today, if we could survey AI contact with humans, I'm afraid the top uses by a wide margin would be scams, cheating, deepfakes, and porn.
Do these legitimate applications justify making these tools available to every scammer, domestic abuser, child porn consumer, and sundry other categories of criminal? Almost certainly not.
Phones are also a very popular mechanism for scamming businesses. It's tough to pull off CEO scams without text and calls.
Therefore, phones are bad?
This is of course before we talk about what criminals do with money, making money truly evil.
Without Generative AI, we couldn’t…?
I could have taken the time to do the math to figure out what the rewards structure is for my Wawa points and compare it to my car's fuel tank to discover I should strictly buy sandwiches and never gas.
People have been making nude celebrity photos for decades now with just Photoshop.
Some activities have gotten a speedup. But so far it was all possible before, just not always feasible.
People seemingly have some very odd views on products when it comes to AI.
This line of thinking is ridiculous.
The phone lets you talk to someone you couldn't before, when shouting can't.
ChatGPT lets you...
Please complete the sentence without an analogy
The answer is that ChatGPT allows you to do things more efficiently than before. Efficiency doesn't sound sexy, but this is what adds up to higher prosperity.
Arguments like this can be used against the internet. What does it allow you to do now that you couldn't do before?
Answer might be “oh I don’t know, it allows me to search and index information, talk to friends”.
It doesn’t sound that sexy. You can still visit a library. You can still phone your friends. But the ease of doing so adds up and creates a whole ecosystem that brings so many things.
AI is fascinating technology with undoubtedly fantastic applications in the future, but LLMs mostly seem to be doing two things: provide a small speedup for high quality work, and provide a massive speedup to low quality work.
I don't think it's comparable to the plow or the phone in its impact on society, unless that impact will be drowning us in slop.
How obtuse. The poster is saying they don't enable anything of value.
Phones are utilities. AI companies are not.
I think the point the article was trying to make is: LLMs and new genAI tools helped the scammers scale their operations.
But did it accelerate the whole process? Hell yeah.
Instead of being used to protect us or make our lives easier, it is being used by evildoers to scam the weak and vulnerable. None of the AI believers will do anything about it because it kills their vibe.
And as with child pornography, the AI companies are engaging in high-octane buck-passing more than actually trying to tamp down the problem.
The language of the reader is no longer a serious barrier to, or indicator of, a scam. ("A real bank would never talk like that" is now "well, that's something they would say, the way that they would say it.")
Yes. Yes it does. That is the satire.
Before this we had "the internet is for porn." Same sort of exaggerated statement.
If you aren't familiar, look into it.
The water usage by data centers is fairly trivial in most places. The water use manufacturing the physical infrastructure + electricity generation is surprisingly large but again mostly irrelevant. Yet modern ‘AI’ has all sorts of actual problems.
In order to be funny at least!
Open source models are available at highly competitive prices for anyone to use and are closing the gap to 6-8 months from frontier proprietary models.
There doesn't appear to be any moat.
This criticism seems very valid against advertising and social media, where strong network effects make dominant players ultra-wealthy and act like a tax, but the AI business looks terrible, and it appears that most benefits are going to accrue fairly broadly across the economy, not to a few tech titans.
NVIDIA is the one exception to that, since there is a big moat on their business, but not clear how long that will last either.
When the market shifts to a more compliance-relevant world, I think the Labs will have a monopoly on all of the research, ops, and production know-how required to deliver. That's not even considering if Agents truly take off (which will then place a premium on the servicing of those agents and agent environments rather than just the deployment).
There are a lot of assumptions in the above, and the timelines certainly vary, so it's far from a sure thing - but the upside definitely seems there to me.
If Open Source can keep up from a pure performance standpoint, any one of these cloud providers should be able to provide it as a managed service and make money that way.
Then OpenAI, Anthropic, etc end up becoming product companies. The winner is who has the most addictive AI product, not who has the most advanced model.
What we can argue about is whether AI is truly transforming everyone's lives, and the answer is no. There is a massive exaggeration of benefits. The value is not ZERO. It's not 100. It's somewhere in between.
What I predict is that we won't advance in memory technology on the consumer side as quickly. For instance, a huge number of basic consumer use cases would be totally fine on DDR3 for the next decade. Older equipment can produce this; so it has value, and we may see platforms come out with newer designs on older fabs.
Chiplets are a huge sign of growth in that direction - you end up with multiple components fabbed on different processes coming together inside one processor. That lets older equipment still have a long life and gives the final SoC assembler the ability to select from a wide range of components.
They are investing tens of billions.
What happens when the AI bubble is over and the developers of open models don't want to incinerate money anymore? Foundation models aren't like curl or OpenSSL. You can't maintain one with a few engineers' free time.
Spending a million dollars on training and giving the model away for free is far cheaper than spending hundreds of millions of dollars on inference every month and charging a few hundred thousand for it.
Like after dot-com the leftovers were cheap - for a time - and became valuable (again) later.
As an LLM user, I use whatever is free/cheapest. Why pay for ChatGPT if Copilot comes with my office subscription? It does the same thing. If not, I use DeepSeek or Qwen and get very similar results.
Yes, if you're a developer on Claude Code et al., I get the point. But that's few people. The mass market is just using chat LLMs, and those are nothing but a commodity. It's like jumping from Siri to Alexa to whatever the Google thing is called. There are differences, but they're too small to be meaningful for the average user.
The recipe also isn't that much of a secret, they read it on the air on a This American Life episode and the Coca Cola spokesperson kind of shrugged it off because you'd have to clone an entire industrial process to turn that recipe into a recognizable Coke.
https://www.reddit.com/r/CopyCatRecipes/comments/1qbbo6d/coc...
imo there are actually too few answers for what a better path would even look like.
hard to move forward when you don't know where you want to go. answers in the negative are insufficient, as are those that offer little more than nostalgia.
We could use another Roosevelt.
- big tech should pay for the data they extract and sell back to us
- startups should stop forcing ai features that no one wants down our throats
- the vanguard of ai should be open and accessible to all not locked in the cloud behind paywalls
It's just not a well-thought-out comment. If we focus on the "better path forward", the entrance to which is only unlocked by the realisation that Big Tech's achievements (and thus, profits) belong to humanity collectively... After we reach this enlightened state, what does OP believe are the first couple of things a traveller on this path is likely to encounter (beyond Big Tech's money, which incidentally we take loads of already in the form of taxes, just maybe not enough)?
First you have tech's ability to scale. That ability also lets it creep new changes/behaviors into every aspect of our lives faster than any previous 'engine for change' could.
Tech also inherits, so you can treat it as Lego: what are we at now, definitely tens, maybe hundreds of thousands of human-years of work as building blocks to build on top of? Imagine if you started every house with a hundred thousand human-years of labor already completed, instantly. No other domain in human history accumulates tens of millions of skilled human-years annually and allows so much of that work to stack, copy, and propagate at relatively low cost.
And tech's speed of iteration is insane. You can try something, measure it, change it, and redeploy in hours. Unprecedented experimentation on a mass scale leading to quicker evolution.
It's so disingenuous to have tech valuations as high as they are based on these differentiations but at the same time say 'tech is just like everything from the past and must not be treated differently, and it must be assumed outcomes from it are just like historical outcomes'. No it is a completely different beast, and the differences are becoming more pronounced as the above 10Xs over and over.
On that note they say oil is dead dinosaurs, maybe have a word with Saudi Arabia...
Surprisingly, the answer he got was “none, because that’s not how AI works”.
Guess we’ll see if that registers…
But in all seriousness, ai does a pretty good job at impersonating VPs. It’s confidently wrong and full of hope for the future.
Phase 1: 1-2 weeks
Phase 2: 1 week
Phase 3: 2 weeks
8 to 12 hours later, all the work is done and tested.
The funny part to me was that if I had an AI true believer boss, I would report those time estimates directly, and have a lot of time to do other stuff.
Tis the cycle
I have only become more creatively enabled when adopting these tools, and while I share the existential dread of becoming unemployable, I also am wearing machine-fabricated clothing and enjoying a host of other products of automation.
I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.
That's by far not the worst that could happen. There could very well be an axe attached to the pendulum when it swings back.
> Not to mention it's bad business if nobody can afford to use AI because they're unemployed.
In that sense this is the opposite of the Ford story: the value of your contribution to the process will approach zero so that you won't be able to afford the product of your work.
Hatred of the technology itself is misplaced, and it is sometimes difficult to debate these topics because anti-AI folk conflate many issues at once and expect you to have answers for all of them, as if everyone working in the field is on the same agenda. We can defend and highlight the positives of the technology without condoning the negatives.
I think hatred is the wrong word. Concern is probably a better one, and there are many technologies it is perfectly OK to be concerned about. If you're not somewhat concerned about AI, then you probably have not yet thought about the possible futures that can stem from this particular invention, and not all of those are good. See also: atomic bombs, the machine gun, and the invention of gunpowder, each of which I'm sure may have some kind of contrived positive angle, but whose net contribution to the world we live in was not necessarily positive. And I can see quite a few ways in which AI could very well be worse than all of those combined (as well as some ways in which it could be better, but for that to be the case humanity would first have to grow up a lot).
And like anything else, it will be a tool in the elite's toolbox of oppression. But it will also be a tool in the hands of the people - unless anti-AI sentiment gets compromised and redirected into support for limiting access to capable generative models to the State and research facilities.
The hate I am referring to is often more ideological, about the usage of these models from a purity standpoint. That only bad engineers use them, or that their utility is completely overblown, etc. etc.
I grew up very poor and was homeless as a teenager and in my early 20s. I still studied and practiced engineering and machine learning then, I still made art, and I do it now. The fact that Big Tech is the new Big Oil is beside the point. Plenty of companies are using open training sets and producing open, permissively licensed models.
I'm not really a fan of the "you criticize society yet you participate in it" argument.
>I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.
You seem to forget the blood shed over the history that allowed that tech to benefit the people over just the robber barons. Unimaginable numbers of people died just so we could get a 5-day workweek and a minimum wage.
We don't get a beneficial future by just laying down and letting the people with the most perverse incentives decide the terms. The very least you can do is not impede those trying to fight for those futures if you can't or don't want to fight yourself.
> I'm not really a fan of the "you criticize society yet you participate in it" argument.
It seems to me that GP is merely recognizing the parts of technological advance that they do find enjoyable. That's rather far from the "I am very intelligent" comic you're referencing.
> The very least you can do is not impede those trying to fight for those futures if you can't/don't want to fight yourself.
Just noting that GP simply voiced their opinion, which IMHO does not constitute "impedance" of those trying to fight for those futures.
Machine fabrication is nice. Machine fabrication from sweatshop children in another country is not enjoyable. That's the exact nuance missing from their comment.
>GP simply voiced their opinion, which IMHO does not constitute "impedance" of those trying to fight for those futures.
I'd hope we'd understand since 2024 that we're in an attention society, and this is a very common tactic used to dissuade people from engaging in action against what they find unfair. Enforcing a feeling of inevitability is but one of many methods.
Intentionally or not, language like this does impede the efforts.
Me neither, and I didn't make such an argument.
> You seem to forget the blood shed over the history that allowed that tech to benefit the people over just the robber barons. Unimaginable numbers of people died just so we could get a 5-day workweek and a minimum wage.
What does that have to do with my argument? What about my argument suggested ignorance of this fact? This is just another straw man.
> We don't get a beneficial future by just laying down and letting the people with the most perverse incentives decide the terms. The very least you can do is not impede those trying to fight for those futures if you can't/don't want to fight yourself.
What an incredible characterization. Nothing about my argument is "laying down", perhaps it seems that way because you do not share my ideals, but I fight for my ideals, I debate them in public as I do now, and that is the furthest thing from "laying down" and "not fighting myself". You seem to be projecting several assumptions about my politics and historical knowledge. Did you have a point to make or was this just a bunch of wanking?
> I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.
This seems overly optimistic, but also quite dystopian. I hope that society doesn't become as integrated with these shitty AIs as we are with other technologies.
Of course, that might be less and less true about our work as time goes on. At some point in the future, hiring an engineer who refuses to use generative coding tools will be the equivalent of hiring someone today who refuses to use an IDE or even a tricked out emacs/vim and just programs everything in Notepad. That's cool if they enjoy it, but it's unproductive in an increasingly competitive industry.
I'd rather be dead than a cortex reaver[1]
(and I suspect, as I'm not a billionaire, the billionaire-owned killbots will make sure of that)
Cool science and engineering, no doubt.
Not paying any attention to societal effects is not cool.
Plus, presenting things as inevitabilities is just plain confidently trying to predict the future. Anyone can say "I understand one day this era will be history and X will have happened". Nobody knows how the future will play out. Anyone who says they do is a liar. If they actually knew, they'd go ahead and bet all their savings on it.
That doesn't mean I also must condone our use of the bomb, or condone US imperialism. I recognize the inevitability of atomic science; unless you halt all scientific progress forever under threat of violence, it is inevitable that a society will have to reckon with atomic science and its implications. It's still fascinating, dude. It's literally physics, it's nature, it's humbling and awesome and fearsome and invaluable all at the same time.
> Not paying any attention to societal effects is not cool.
This fails to properly contextualize the historical facts. The Nazis and Soviets were also racing to create an atomic bomb, and the world was in a crisis. Again, this isn't ignorant of US imperialism before, during or after the war and creation of the bomb. But it's important to properly contextualize history.
> Plus, presenting things as inevitabilities is just plain confidently trying to predict the future.
That's like trying to admonish someone for watching the Wright Brothers continually iterate on aviation, witnessing prototype heavier-than-air aircraft flying, and suggesting that one day flight will be an inevitable part of society.
The steady march of automation is an inevitability my friend, it's a universal fact stemming from entropy, and it's a fallacy to assume that anything presented as an inevitability is automatically a bad prediction. You can make claims about the limits of technology, but even if today's frontier models stop improving, we've already crossed a threshold.
> Anyone who says they do is a liar.
That's like calling me a liar for claiming that the sun will rise tomorrow. You're right; maybe it won't! Of course, we will have much, much bigger problems at that point. But any rational person would take my bet.
The deep analysis starts at this section: https://andymasley.substack.com/p/the-ai-water-issue-is-fake...
You can't just dismiss anything you don't like as AI.
Some are projecting >35% drop in the entire index when reality hits the "magnificent" 7. Look at the price of Gold, corporate cash flows, and the US Bonds laggard performance. That isn't normal by any definition. =3
For those having trouble finding the humor, it lies in the vast gulf between grand assertions that LLMs will fundamentally transform every aspect of human life, and plaintive requests to stop saying mean things about it.
As a contrast: truly successful products obviate complaints. Success speaks for itself. In TV, software, e-commerce, statins, ED pills, modern smartphones, social media, etc… winning products went into the black quickly and made their companies shitloads of money (profits). No need to adjust vibes, they could just flip everyone the bird from atop their mountains of cash. (Which can also be pretty funny.)
There are mountains of cash in LLMs today too, but so far they’re mostly on the investment side of the ledger. And industry-wide nervousness about that is pretty easy to discern. Like the loud guy with a nervous smile and a drop of sweat on his brow.
So much of the current discourse around AI is the tech-builders begging the rest of the world to find a commercially valuable application. Like the AgentForce commercials that have to stoop to showing Matthew McConaughey suffering the stupidest problems imaginable. Or the OpenAI CFO saying maybe they’ll make money by taking a cut of valuable things their customers come up with. “Maybe someone else will change the world with this, if you’ll all just chill out” is a funny thing to say repeatedly while also asking for $billions and regulatory forbearance.
Makes me consider: Dotcom domains, Bitcoin, Blockchain, NFTs, the metaverse, generative AI…
Varying degrees of utility. But the common thread is people absolutely begging you to buy in, preying on FOMO.
Oh, and most of them had a crypto bag too.
<sigh>
https://en.wikipedia.org/wiki/Competitive_exclusion_principl...
The damage is already clear =3
https://www.youtube.com/watch?v=TYNHYIX11Pc
I've never really been able to get into it either because it's sort of a paradox. If I agree, I feel bad enough about the actual issue that I'm not really in the mood to laugh, and if I disagree then I obviously won't like the joke anyways.
I find it unfunny for the same reason I don't find modern SNL intro bits about Trump funny. The source material is already insane to the point that it makes surface-level satire like this feel pointless.
What's your answer to this? How did it turn out for nuclear energy? If it wasn't for this sort of thinking we'd have nuclear power all over the world and climate issues would not have been as bad.
You should embrace it, because other countries will, and yours will be left behind if you don't. That doesn't mean put up with "slop", but it also doesn't mean being hostile to anything labeled "AI". The tech is real, and it is extremely valuable (I applaud your mental gymnastics if you think otherwise), but not as valuable as these CEOs want it to be, or in the way they want it to be.
On one hand you have clueless executives and randos trying to slap "AI" on everything and creating a mess. On the other extreme you have people who reject things just because auto-complete (LLMs :) ) is one of their features. You're both wrong.
What Jensen Huang and other CEOs like Satya Nadella are saying about this mindless bandwagoning of "oh no, AI slop!!!" b.s. is true, but I think even they are too caught up in tech circles. Regular people, for the most part, don't feel this way; they only care about what the tool can do, not how it's doing it. But people in tech largely influence how regular people are educated, informed, etc...
Look at the internet: how many "slop" sites were there early on? How often did it get dismissed because "all the internet is good for is <slop>"?
Forget everything else, just having an actual program.. that I can use for free/cheap.. on my computer.. that can do natural language processing well!!! that's insane!! Even in some of the sci-fi I've been rewatching in recent years, the "AI/Computer" in spaceships or whatever is nowhere near as good as chatgpt is today in terms of understanding what humans are saying.
I'm just calling for a bit of perspective on things. Some people are too close to things and looking under the hood too much; others are too far away and looking at it from a distance. The AI stock valuations are of course ridiculous, as are the overhyped investments in this area and the datacenter buildout madness. And like I said, there are tons of terrible attempts at using this tech (including Windows Copilot), but the extreme hostility against AI I'm seeing is also concerning, and not because I care about this awesome tech (which I do), but because the job market is rough and everything is already crappy. I don't want to go through an AI market crash or whatever on top of other things, so I would really appreciate it on a personal level if the cause of any AI crash were merit-based instead of hype and bandwagoning. That's all.
I wasn’t old enough to argue against the internet. And to be fair to the ones who were, there was no prior tech anything like it to even make realistic guesses about what it would turn out to be.
I wasn’t old enough to argue against social media and the surveillance it brought.
Now AI comes along. And I am old enough. And I am experienced enough in a similar space. And I have seen what similar technologies have done and brought. And I have taken all that, and my conscience and instinct tell me that AI is not a net good.
Previous generations have failed us. But we make do with the world we find ourselves born into.
I find it absurd that experienced engineers today look at AI and believe it will make their children’s lives better, when very recent history, history they themselves lived through, tells a very different story.
All so they can open 20 PRs per day for their employers.
Someone coined a term for those of the general population who trust this small group of billionaires and defend their technology.
“Dumb fucks”
https://www.darioamodei.com/essay/the-adolescence-of-technol...
the whole thing reads as "it's going to be so powerful! give money now!"
I wonder what name the tech bros will come up with to call us for the same feeling nowadays.
Ridiculous to say the technology, by itself, is evil somehow. It is not. It is just math at the end of the day. Yes you can question the moral/societal implications of said technology (if used in a negative way) but that does not make the technology itself evil.
For example, I hate vibe coding with a passion because it enables wrong usage (IMHO) of AI. I hate how easy it has become to scam people using AI. How easy it is to create disinformation with AI. Hate how violence/corruption etc could be enabled by using AI tools. Does not mean I hate the tech itself. The tech is really cool. You can use the tech for doing good as much as you can use it for destroying society (or at the very minimum enabling and spreading brainrot). You choose the path you want to tread.
Just do enough good that it dwarfs the evil uses of this awesome technology.
Democratisation of tech has allowed for more good to happen, centralisation the opposite. AI is probably one of the most centralisation-happy tech we've had in ages.
Capitalism demands profits. Competition is bad for profits. Multiple factories are bad for profits. Multiple standards are bad for profits. Expensive workers are bad for profits.
AI is literally trained on human output and used by humans. If humans are doing awful things with it, it's because humans are awful right now.
I strongly feel this is related to the rise of fascism and wealth inequality.
We need a great conflict like WW2 to release this tension.
Not really - it's math, plus a bazillion jigabytes of data to train that math, plus system prompts to guide that math, plus data centers to do that math, plus nice user interfaces and APIs to interface with that math, plus...
Anyway, it's just kind of a meaninglessly reductive thing to say. What is the atom bomb? It's just physics at the end of the day. Physics can wreak havoc on the world; so can math.
That said, their thinking is that this can remove labor from their production, all while stealing works under the very copyright system they set up. So I'd call that "evil" in every conventional sense.
>Just do enough good that it dwarfs the evil uses of this awesome technology.
The evil is in the root of the training, though. And sadly, the money is not coming from "good". I don't see any models focusing on training only on CC0/FOSS works, so it's hard to argue for any good uses built on evil roots.
If they could do that at the bare minimum, maybe they can make the argument over "horses vs cars". As it is now, this is a car powered by stolen horses. (also I work in games, and generative AI is simply trash in quality right now).
This also ignores the broken fucking copyright system that ensures once you create something you get many lifetimes of fucking off without having to work, so if genAI kills that I won't shed a tear.
Many people would rather argue about morality and conscience (of our time, of our society) instead of confronting facts and reality. What we see here is a textbook case of that.
My confusion is more that the person I was responding to was complaining about people arguing morality, which seems incredibly important to discuss. A lack of facts obviously makes discussions bad, but there's definitely no dichotomy with discussing morality (at least not among the people I know). My issue has rarely been with people arguing morality, which often makes for my more productive arguments, and more with people who hold a fundamentally incompatible view of what the facts are.
Someone who's clear-eyed about the facts is much more likely to have a guilty conscience/think someone's actions are unconscionable.
I don't mean to argue either side in this discussion, but both sides might be ignoring the facts here.
okay, what are the "facts and reality" here? If you're just going to say "AI is here to stay", then you 1) aren't dealing with the core issues people bring up, and 2) aren't bringing facts but defeatism. Where would we be if we used that logic for, say, Flash?
You can still criticise without being mean.
I can certainly criticize specific things respectfully. If I prioritised demonstrating my moral superiority I could loudly make all sorts of disingenuous claims that won't make the world a better place.
I certainly do not think people should be making exploitative images in Photoshop or indeed any other software.
I do not think that I should be able to choose which software those rules apply to based upon my own prejudice. I also do not think that being able to do bad things with something is sufficient to negate every good thing that can be done with it.
Countless people have been harmed by the influence of religious texts, I do not advocate for those to be banned, and I do not demand the vilification of people who follow those texts.
Even though I think some books can be harmful, I do not propose attacking people who make printing presses.
What exactly are you requiring here? Pitchforks and torches? Why AI and not the other software that can be used for the same purposes?
If you want robust regulation that can provide a means to protect people from how models are used then I am totally prepared (and have made submissions to that effect) to work towards that goal. Being antagonistic works against making things better. Crude generalisations convince no-one. I want the world to be better, I will work towards that. I just don't understand how anyone could believe vitriolic behaviour will result in anything good.
Stable Diffusion enabled the average lazy depraved person to create these images with zero effort, and there's a lot of these people in the world apparently.