What's fascinating is that this value elevation seems to have gone largely unchallenged, despite being in essence an arbitrary value choice. Other choices were possible, and I hope still are, despite what the bigcorps declare must be the case in order to maximize shareholder returns.
It is harder to measure craft, care, or wonder. My best proxy is emails from real people, but those are sporadic, unpredictable, and a lot harder to judge than analytics screens that update every minute.
[1] https://news.ycombinator.com/item?id=46705588
[2] https://www.reddit.com/r/ExperiencedDevs/comments/1qj03gq/wh...
I think that really high-quality code can be created via coding agents. Not in one prompt, but through an orchestration of planning, implementing, validating, and reviewing.
It's still engineering work. The code still matters. It's just a different tool to write the code.
I'd compare the difference between manually coding and operating a coding agent to the difference between a handsaw and a chainsaw - the end result is the same but the method is very different.
I don't think anyone really cares about LLM code that produces the exact same end result as the hand-written version.
It's just that in reality the LLM version is almost never the same as the hand-written version; it's orders of magnitude worse.
This is not my experience *at all*. Maybe models from like 18+ months ago would produce really bad code, but in general most coding agents are amazing at finding existing code and replicating the current patterns. My job as the operator then is to direct the coding agent to improve whatever it doesn't do well.
I've heard this from a few smart people whom I know really well. They strongly believe this; they also believe that most people are deluding themselves, but not them - they're in the actually-great group - and when I pointed out the sloppiness of their LLM-assisted work they would have none of it.
I'm specifically talking about experienced programmers who now let LLMs write the majority of their code.
If you add a second, skilled programmer, just having two people communicating imperfectly drops quality to 90% of the base.
If I add an LLM instead, it drops to maybe 80% of my base quality. But it's still not bad. I'm reading the diffs. There are tests and fancy property tests and even more documentation explaining constraints that Claude would otherwise miss.
So the question is if I can get 2x the features at 80% of the quality, how does that 80% compare to what the engineering problem requires?
From my own observations, the set of people I previously observed to be sloppy in their thinking and other work correlates almost perfectly with those who seem most eager to praise LLMs.
It's almost as if the ability to identify bullshit makes you critical of the ultimate bullshit generator.
Lots of times I could just write it myself and be done with it.
America went full car, to the point where just going to the shops from the suburbs is a car drive. Crossing the ROAD needs a car in way too many places.
There are cities where you can find a shop for essentials within walking distance; bigger shops need a short to medium drive, but can still be walked to if you really want to.
Or do you suppose this is the best AI will ever get?
Plus "planning, implementing, validating, and reviewing" would be a bit like walking anyway in your analogy.
Do you have any advice to share (or resources)? Have you experienced it yourself?
If your flow state involves elaborating complementary specifications in parallel, it's marvelous
What does high quality code look like?
> The code still matters.
How so?
I argue the code still matters for three reasons. If the code doesn't work, your product won't work. If it's not secure, there are obvious consequences. And if you can't build new features quickly, you will end up wasting money and time.
Rumor has it there were a few elite crafters among the lot. Software wizards who pondered systems and architecture over a $10 espresso macchiato.
When writing software for yourself, there is a bias towards implementing just the features you want and never mind the rest. Sometimes the result can be pretty sloppy, but it works.
However, code health is a choice. You just need to know what to ask for. A coding agent can be used as a power washer to tidy up a project. This won't result in great art, but like raking leaves or cleaning your steps or plowing a driveway, it can be satisfying.
Just as you wouldn't use a power washer to clean a painting, maybe there's some code that's too delicate to use a coding agent on? But for a project that has good tests and isn't that delicate, which I believe includes most web apps, nobody's going to want to pay for you to do it by hand anymore. It would be like paying someone to clear the snow in a parking lot with a shovel rather than hiring someone with a plow.
This here, so much. When some group paying you millions says they want a feature or they'll look at competitors, all kinds of crap ends up in the software.
Don't forget that managers have different goals than rank-and-file employees.
For the SaaS I work on, we get requirements like required fields for a process, because a manager needs correct data and insight into the business process.
After we deliver the software, we get support tickets from the employees using the system, complaining that "it takes too much time to fill in all this data" and that we should "fix our shitty system".
They don't care, and they don't have full knowledge of why the data is required - which is the fault of managers who are not training their people and explaining the "why".
Oh, and of course they have to copy-paste shit over and over, because their company won't budget for us to integrate with their CRM, and we won't invest in something that benefits only a single customer who might not renew the license next year. But they also don't want to make a commitment like a 5-year contract under which we could do some investment. There are some that invest in connecting the CRM, but it's mostly the exception rather than the rule.
Enterprise software can be good, because there isn't an incentive mismatch: a good solution is more valuable for the customers, it will sell better, and they're willing to pay for it.
But like you say, a lot of enterprise software is bad because it's optimized for the payer, not the user, and it's often shoehorned into the weird workflows of the particular enterprise.
- Power tools didn't annihilate the craftsmanship of hand-tool woodworking. Fine woodworkers are still around making money with hand tools, as are hobbyists. But contractors universally switched to power tools because they help them make more money with less labor/cost/time.
- A friend of mine still knits on a loom because she likes using a loom. Some people knit by hand because they like that better. Neither of them stopped just because of large automated looms.
- Blacksmiths still exist and make amazing metal crafts. That doesn't mean there isn't a huge market for machine cast or forged metal parts.
In the future there'll just be the "IDE people" and the "Agent Prompt people", both plugging away at whatever they do.
200 years ago, being a blacksmith was a viable career path. Now it's not. The use of hand tools, hand knitting, and hand forging is limited to niche, exotic, or hobbyist areas. The same could be said of making clothes by hand or developing film photographs. Coding will be relegated to the same purgatory: not completely forgotten, but considered an obsolete eccentricity. Effectively all software will be made by AI. Students will not study coding, the knowledge of our generation will be lost.
Everything gets worse over time. Even before AI, I was constantly complaining about how technology is enshittifying. I'm sure my parents complained about things getting worse, and their parents too. Yet here we are, the peak achievement of living beings on this planet, making do. I think we will be OK without typing in by hand a thing that didn't even exist 70 years ago.
It's a question of supply and demand in the labor market. Right now, we are paid well and afforded respect because demand for our service is higher than the supply. When anyone can use AI to do our job, the supply will exceed the demand.
There are blacksmiths still working today. Their work is niche. And although blacksmithing today requires no less skill than it did 200 years ago, there is significantly less demand, and very few can make a living at it.
I doubt the laborer would describe their toil as "craft".
Programmers have become accustomed to a lot of cultural and financial respect for their work. That's about to disappear. How do you think radio actors felt when they were displaced by movies? Or silent film actors when they were displaced by talkies?
> I doubt the laborer would describe their toil as "craft".
Intellectual labor is labor. I'm a laborer in programming and I definitely consider it a craft. I think a lot of people here at HN do.
I'm not sure who you are arguing against. No one here said that the world isn't changing. But it seems to me that the people who are disadvantaged by AI, which is potentially everyone who doesn't own a data center, should take efforts to ensure their continued survival, instead of merely becoming serfs to the ruling oligarchs.
If I knit a hat, I can sell it once, but if I make a game, I can run or sell it repeatedly.
However, I still agree with the outcome: if AI becomes even better and is economically viable, the number of people handcrafting software will shrink drastically.
Given the echo chamber of HN when it comes to AI, that certainly seems inevitable. The question is: who will work on novel things or on further AI model improvements if it so happens that the knowledge of writing software by hand disappears?
1. AI will work on AI. 2. People will work on AI, but as a rare niche, not a mainstream area of software development.
I don't see a huge difference between people writing in a high-level language and people writing complex prompts.
Using AI agents isn't coding in a Common Lisp home-made macro DSL; it's me doing in one hour something that could have taken a couple of days, even if I have to fix some slop along the way.
Thus I can already see the trend that started with MACH architecture and SaaS products going even further, decreasing the team sizes required for project delivery.
Projects where I used to be part of a 10-person team are starting to be sized at 5 people or fewer.
We seem to take everything for granted now and forget what real engineering is like.
This review is itself 13 years old:
https://www.theguardian.com/books/2012/mar/25/turings-cathed...
The standard: Forth words should be a few lines of code with a straightforward stack effect. The top level of a program might be 5 words.
An LLM will generate some subroutines and one big word of 20-50 lines of nested IF..THEN..ELSE and DO..WHILE, just as if it were writing C.
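To make the contrast concrete, here's a minimal sketch of the factored style in standard Forth (the temperature example and word names are my own, purely for illustration):

  \ Each word is a line or two with a documented stack effect.
  : f>c        ( fahrenheit -- celsius )  32 - 5 9 */ ;
  : freezing?  ( celsius -- flag )        0< ;
  : .verdict   ( flag -- )  IF ." freezing" ELSE ." mild" THEN ;
  : report     ( fahrenheit -- )  f>c DUP . freezing? .verdict ;

  72 report    \ prints: 22 mild

The top level reads as a sentence of named words; the C-flavored version an LLM emits inlines all of this into one long definition.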
I remember when I first started out programming 20 years ago, there was time to craft good quality code. Then there were more and more pushes to get more code out faster, and no one really cared about the quality. Bugs became part of the cost of doing business. I think GenAI for code fits well in the more recent paradigm, and comparing it with hand-crafted code of yore is a bit disingenuous, as appealing as it may be, because most code hasn't been that good for a long time.
I am sad to admit it, but AI is just fitting in where poor coding practices already existed, and flourishing in that local maximum.
Business doesn't care about the craft, they care that the use case is solved, even if the code is crap under the hood.
But there are also a few that are really good. I know people who used Romanian and Polish shops that did great work. They weren't super-cheap, but still cheaper than American contract developers.
I assume that the bad offshoring orgs are pretty nervous about AI. I also assume that the better ones are learning to incorporate AI as a force multiplier.
Interesting times ahead…
We usually call that nearshore over here in Europe, exactly to differentiate the expectations of what gets delivered.
A nice example is guitar building: there's a whole bunch of luthiers who stick to traditional methods to build guitars, or even limit themselves to Japanese woodworking tools.
But that is not the only way to build great guitars. It can be done by excellent luthiers building high-quality guitars with state-of-the-art tools - for example Ulrich Teuffel, who uses all sorts of high tech like CAD systems and CNC machines to craft beautiful guitars: https://www.youtube.com/watch?v=GLZOxwmcFVo
Unfortunately, craftsmanship does not come cheap, so most customers will turn to industrially created products. Same for software.
Sure, LLMs as a tool probably have a place in software development, but the danger lies in high volume, low oversight.
But there are people using it at large scale to build large applications; time will tell how those work out in the end. Software engineering is programming over time, and the "over time" for LLM-based software engineering hasn't been long enough yet.
You are a lead architect, a product manager, a lead UXer. You don't have 100% control over what your LLM devs are doing, but more than you think. Just like normal managers don't micromanage every action of their team.
No, you don't; you have "influence" or "suggestion".
You can absolutely narrow down the probability range of what is produced, but there is no guarantee that it will stick to your guidelines.
So far, at least, it's just not how they work.
> You don't have 100% control over what your LLM devs are doing, but more than you think. Just like normal managers don't micromanage every action of their team.
This overlooks the role of actual reasoning/interpretation that is found when dealing with actual people.
While it might seem like directing an LLM is similar in practice to managing a team of people, the underlying mechanisms are not the same.
If you analyse based on comparisons between those two approaches, without understanding the fundamental differences in what's happening beneath the surface, then any conclusions drawn will be flawed.
---
I'm not against LLMs; I'm against using them poorly and presenting them as something they are not.
If you check the commit log, you'll see small increments. The architecture document is what I have it generate to validate the created architecture: https://github.com/ako/backing-tracks/blob/main/docs/ARCHITE...
Other than that, most changes start with the AI generating a proposal document that I will review and improve, and then have it built. I think this was the starting proposal: https://github.com/ako/backing-tracks/blob/main/docs/DSL_PRO...
This started as a conversation in Claude Desktop, which it then summarized into this proposal. This I copied into Claude Code to have it implemented.
This is probably just a disagreement about the term "control", so we can agree to disagree on that one, I suppose.
The rest of the reply doesn't really relate to any of the points I mentioned.
That it's possible to successfully use the tool to achieve your goals wasn't in dispute.
I'll try to narrow it down:
---
> You are not a victim at the mercy of your LLM.
Yes, you absolutely are, it's how they work.
As I said, you can suggest guidelines and directions, but it's not guaranteed they'll be adhered to.
To be clear, this applies to people as well.
---
Directing an LLM (or LLM based orchestration system) is not the same as directing a team of people.
The "interface" is similar in that you provide instructions and guidelines and receive an attempt at the wanted outcome.
However, the underlying mechanisms of how they work are so different that the analogy you were trying to use doesn't make sense.
---
Again, LLMs can be useful tools, but presenting them as something they aren't only serves to muddy the waters of understanding how best to use them.
---
As an aside, IMO, the sketchy-salesman approach of over-promising on features and obscuring the limitations will do great harm to the adoption of LLMs in the medium to long term.
The misrepresentation of terminology is also contributing to this.
The term AI is intentionally being used to attribute a level of reasoning and problem solving capability beyond what actually exists in these systems.
And in practice I have more control with a coding agent than with developers, as I can iterate over ideas quickly: "build this idea", "no, change this", "remove this and replace it with that". Within an hour you can iterate an idea into something that works well. With developers this would have taken days if not more. And they would've complained that I need to better prepare my requirements.
If it's working for you, great, but presenting it like it's a general direct replacement for development teams is disingenuous.
---
> Looks like we just have different expectations: I don't want to micromanage my coding agents any more than I micromanage the developers I work with as a product manager. If the output does what it is supposed to do, and the software is maintainable and extendable by following certain best practices, I'm happy. And I expect that goes for most business people.
None of what I said implied any expectations about the process of using the tools, but if you've found something that works for you, that's good.
On the subject of maintainability and extension: those are usually bound to the level of complexity of the project, and the growth in requirements is not generally linear.
I agree that many business people would love what you've described; very few are getting it.
> And in practice I have more control with a coding agent than with developers, as I can iterate over ideas quickly: "build this idea", "no, change this", "remove this and replace it with that". Within an hour you can iterate an idea into something that works well. With developers this would have taken days if not more. And they would've complained that I need to better prepare my requirements.
Up to a point, yes.
If your application of this methodology works well enough before you hit the limitations of the tooling, that's great.
There is, however, a threshold of complexity where this starts to break down. This threshold can be mitigated somewhat with experience and a better understanding of how to utilise the tooling, but it still exists (currently).
Once you reach this threshold, the approaches you are talking about start to work less effectively and even actively hinder progress.
There are techniques and approaches to software development that can push this threshold out further, but then you're getting into the territory of having to know enough to be able to instruct the LLM to use those approaches.
Except for the other 50% of the time where it goes off the rails and does what you explicitly asked it not to do.
I’ve been writing various single-purpose software tools of these sorts for decades. I would not want to go back to hand-writing them now that I can have agents (cursor, claude code, etc) lay down the algorithmic architecture that I vibe at them, now that I know how to “speak that language” and reliably get the software outcomes that I seek.
I find this similar to how I would not want to spend all day turning the crank handles on a manual milling machine when I can have a CNC mill do it, now that I know how to use various CAM systems well and have the proper equipment.
Given that my overall craft is not limited to just writing code or turning crank handles, I readily embrace any improvements of my workshop “technology stack” so that I can produce higher quality artwork.
Beautiful work!
If AI can be used to make bad (or good enough) software more cheaply, I have no problem with that. I'm sure we will get a huge amount of bad software. Fine.
But what matters is whether we get more great software as well. I think AI makes that more likely rather than less likely.
Less time will be spent on churning out basic features, integrations and bug fixes. Putting more effort into higher quality or niche features will become economically viable.
It might just all meld into a mediocre soup of features.
To be clear, I'm not against AI-assisted coding - I think it can work pretty great - but I'm thinking about the implications for future engineers.
That's true of any particular individual but not for a company that can decide to hire someone who can do more than prompting.
>It might just all meld into a mediocre soup of features
I don't think the relative economics have changed. Mediocre makes sense for a lot of software categories because not everyone competes on software quality.
But in other areas, where software quality makes a difference, it will continue to make a difference. It's not a question of tools.
That's the problem.
I think one promising direction for a shift is that humans do NOT like to talk to bots, especially not for anything important. It's biological. We evolved to learn from and interact with other humans, preferably the same group over a long time, so we really get to understand/mirror/like/support each other.
When CNC came for machining, no one really bitched, because the computers were just removing the time-consuming effort of moving screws by hand.
When computers write code, or screenplays, the quality right now is objectively much worse. That might change, but claims that we're at the point where computers can meaningfully displace that work are pretty weak.
Sure, that might change.
I'd argue that we are, because cloth of higher quality than anything that has ever existed before is available today, it's just really expensive. But high quality cloth was also expensive back then.
I think you are making the error of comparing cheap clothes of today with expensive clothes of the past, rather than cheap clothes with cheap clothes and expensive with expensive. People of the past might have had higher-quality clothes on average, but it's because those clothes were expensive despite being the cheapest available. Trust me, if Shein was around in 1780, everyone would be wearing that garbage.
On the other hand, you could not buy a Gore-Tex Pro shell or an ultralight down jacket for any price in 1800.
I don't disagree that there are a lot of gains, including on net, just that it hasn't been a Pareto improvement with no losses at all.
We now have clothes and materials like Gore-Tex that block rain and snow; no handmade jacket could ever hope to perform at the same level while being lightweight AND dry.
At the cost of massive damage to environmental, animal, and human health.
1. "The technology's capability was inferior to what humans were creating, therefore the quality of the output dropped."
2. "The costs of employing humans created a floor to the price/quality you could offer and still make a profit. Without the human labor, a lower-quality product became possible to offer."
The first is a question of engineering, the second is a question of economic choice and market-fit.
The fabric and clothes were worse, and cheaper. This put many traditional workers out of business, making actually good clothes scarcer, and eventually, more expensive than they previously were.
like polluting every inch of the Earth with microplastics!
"Luddites were not opposed to the use of machines per se (many were skilled operators in the textile industry); they attacked manufacturers who were trying to >>circumvent standard labor practices<< of the time."
Luckily, the brave government's troops, show trials, and making "machine breaking" (i.e. industrial sabotage) a capital crime solved the crisis of these awful, entitled workers' demands once and for all and across all time.
I'm sure that any uppity workers in our present age can also be taught the appropriate lessons.
I was mostly annoyed because Luddites were an early labor movement and their demands were, by modern standards, normal, but they are continually invoked like they believed and demanded things they did not.
I do agree the profession, if we dare diminish coding to exactly one thing, is changing, but I believe the direction of that evolution, unchecked, will exacerbate the ongoing attenuation of the power of labor in the US, even as workers individually become more 'productive'.
Quoting you from elsewhere: "Reminder that the most reliable way to prevent the rise of the far right is to implement robust safety nets and low inequality, to reduce status anxiety and grievance."
We agree here, and likely elsewhere, so keep fighting the good fight on this orange hell site (and also outside if you're able).
Not so much evolution here, I'm afraid. Just a plain redistribution of wealth upwards, thanks to new tools that made a large chunk of the workforce obsolete. How this will affect the industry in 15 years - nobody seems to think about that.
Why hire an experienced coder to create project X, when you can just use an open source project and hire cheaper and less experienced coders to make updates? I've been part of many of these conversations with business leaders and management over the years.
Developers have been giving away their work and devaluing their profession for decades, and that has basically turned it into digital factory work.
It's why I stopped writing software professionally almost a decade ago.
AI is using all of this open source to train and will eventually put you out of a job.
Dario Amodei, CEO of Anthropic, thinks that this disruption will be different from that one. From his article The Adolescence of Technology, currently on HN's front page:
> AI will be capable of a very wide range of human cognitive abilities—perhaps all of them. This is very different from previous technologies like mechanized farming, transportation, or even computers. This will make it harder for people to switch easily from jobs that are displaced to similar jobs that they would be a good fit for. For example, the general intellectual abilities required for entry-level jobs in, say, finance, consulting, and law are fairly similar, even if the specific knowledge is quite different. A technology that disrupted only one of these three would allow employees to switch to the two other close substitutes (or for undergraduates to switch majors). But disrupting all three at once (along with many other similar jobs) may be harder for people to adapt to. Furthermore, it’s not just that most existing jobs will be disrupted. That part has happened before—recall that farming was a huge percentage of employment. But farmers could switch to the relatively similar work of operating factory machines, even though that work hadn’t been common before. By contrast, AI is increasingly matching the general cognitive profile of humans, which means it will also be good at the new jobs that would ordinarily be created in response to the old ones being automated. Another way to say it is that AI isn’t a substitute for specific human jobs but rather a general labor substitute for humans.
Software quality is about speed of delivery and lack of bugs.
If you're fine with software which gets a little bit harder to work on every time you make a change and which might blow up in unexpected ways, AI is totally fine.
I've yet to meet many AI champions who are explicit about their desire to make that trade-off, though. Even the ones who downplay software quality aren't super happy about the bugs.
Human-written code is similarly fine, save for a very few individuals.
While the speed and scale at which these happen is definitely important (and I agree that AI code can pose a problem on that front), this applies to every human-written piece of software I've ever worked on too.
Let me tell you why I like shopping from Amazon instead of going to a supermarket...
But also the older I get I keep wanting to visit the store in person. It's not to see the other human, I just want to hold the thing I want to buy and need it immediately instead of waiting. I feel like there isn't enough time anymore.
I remember a sales person in a store actively trying to shame me for purchasing from a brand he didn't prefer. I gave him a lot of chances to get off of me, respectfully and firmly. Some really are like blood sucking leeches, they don't come off. He was probably paid to do that.
Nonsense. We never evolved to send text messages and yet here we are with social networks, chat systems and emails used everywhere for everything.
Unfortunately for the general populace, most technological improvements in information technology over the past 5 decades have led to a loss of political control and lessened their leverage for political change.
With AI, this change is going to be accelerated a hundredfold.
With current AI slop and, more importantly, AI-generated content almost indistinguishable from reality, the populace is slowly learning to reject what they see and hear from mass media.
AI has muddied the pool so much that every fish - us - cannot see the whole pool. This will let political figures and bad actors, much more easily and almost with no effort at all, create isolation among people.
No event will create a mass uprising, because no event can be believed by a common mass. It will be easy to generate an alternative reality using the same AI.
Now, the political class and the billionaire class, are free to act with impunity, because the last check on their power, the power of mass media to form public opinion, to inspire the masses to demand change or accountability, has eroded to the point of no return. (They have already captured the institutions of public power)
I fear for the future of humanity.
Edit: There are already troubling signs from the billionaire class regarding this. There is a narrative to "ensure guardrails" for AI, sort of giving the populace the idea that once that is done, AI is acceptable. This is like saying, "better have a sheath on the knife, so that no one can cut with it, but use it as a prop in a movie".
They are creating this narrative that AI is inevitable.
They are fear-mongering that AI is going to take jobs - which it will - but it also goads the capable ones to get onto the bandwagon and advance AI further.
AI will cause plenty of problems of course, because it puts a powerful tool in the hands of those who shouldn't use it, but also, many people shouldn't drive cars or use the internet.
AI is inevitable, just as calculators and computers were. I suspect that no humans will write code by hand in a not so distant future. The machines will be far too effective.
Personally, I love the acceleration we're seeing since the coming of computers. The internet is a big black mirror, scary, beautiful, and ugly, just like us. AI feels similar to me.
Will some things get worse because of AI? Probably. But maybe it'll also help to save us from ourselves. If nothing else, it will probably force some investment into long overdue security, identity, and trust issues.
I have recently started exploring AI-coding -- note that I said AI coding and not vibe-coding because that is for the brain-dead.
By AI coding, I mean I know the inputs, outputs, structures the code should have and the necessary context to write the code. Then, articulating the requirements in English as best as I can and feeding it to agents.
Needless to say, the code is pathetic; it chooses to implement meaningless abstractions even after I explicitly provide the design and plan to follow.
I don't understand how we, as a collective species, agreed to believe the criminally wrong lies of tech CEOs: that instead of implementing a "reliable system by hand," we should convey our ideas and instructions in an "ambiguous," "inconsistent," and "context-dependent" language (English), which is then passed through a probabilistic system to generate the reliable system.
As for lies and bad code, it didn't appear with AI. Humans lied and produced bad code before AI.
How does the author empirically know AI does not understand? And if it does not understand right now, is a machine fundamentally unable to understand? Is understanding an exclusive human ability? Is it because machines lack a soul? It sounds quite dualistic (Descartes' view that mind and body are fundamentally different).
Don't get me wrong: I think that right now AI is less good at understanding humans than other humans (or even dogs) are in many contexts, because it has no access to non-verbal signals. But in the context of building software, it is good enough, and I don't see why a machine should not be able to understand humans.
I hired back on at a company I used to work at and found they had contracted work out to another former employee, who was handed the code from a REST API I had written and a web app that used it. The task was to write an Android app that interacted with the API.
He ran it through an agentic coding assistant and got out API scaffolding and a basic UI.
Looking it over, I couldn't shake the feeling I was looking at my own code, just ported to Kotlin. I was seeing my idiosyncrasies everywhere. It was kind of surreal.
I was familiar with the work of the dev who did it, and this was nothing like his prior work - but it's been years since I last saw him, so who knows.
I couldn't help but admit it was a good foundation to start building on.
I told the PM they were likely overpaying significantly for an agentic coding assistant and only getting access to it for a few hours a month. This same organization recoiled in terror when I pointed out the cost of a Claude Code subscription once.
Especially with my rules:
- Prefer simple, boring solutions.
- Before adding complexity to work around a constraint, ask if the constraint needs to exist.
- Remember: The best code is often the code you don't write.
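To make rules like these stick across sessions, I keep them in the project's agent instructions file. A sketch, assuming a Claude Code-style CLAUDE.md in the repo root (other agents use similar per-project files):

  # CLAUDE.md
  ## Coding rules
  - Prefer simple, boring solutions.
  - Before adding complexity to work around a constraint,
    ask whether the constraint needs to exist.
  - The best code is often the code you don't write:
    propose deleting or reusing before adding.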
AI is just the next example, and the internet is particularly prone to filtering the tackiest stuff up to the top.
Always remember that the broadly positive aspects remain; they're simply obscured by dusty clouds from the bandwagon.
Use tools to expand your capability.
> AI code [..] may also free up a space for engineers seeking to restore a genuine sense of craft and creative expression
This resonates with me, as someone who joined the industry circa 2013 and discovered that most big tech jobs were essentially glorified plumbing.
In the 2000s, the web felt more fun, more unique, more unhinged. Websites were simple, and Flash was rampant, but it felt like the ratio of creators to consumers was higher than now.
With Claude Code/Codex, I've built a bunch of things that usually would die at a domain name purchase or init commit. Now I actually have the bandwidth to ship them!
This ease of dev also means we'll see an explosion in slopware, which we're already starting to see with App Store submissions up 60% over the last year[0].
My hope is that, with the increase of slop, we'll also see an increase in craft. Even if the proportion drops, the scale should make up for it.
We sit in prefab homes, cherishing the cathedrals of yesteryear, often forgetting that we've built skyscrapers the ancient architects could never dream of.
More software is good. Computers finally work the way we always expected them to!
[0] https://www.a16z.news/p/charts-of-the-week-the-almighty-cons...
Most tech jobs are glorified plumbing. I've worked in big tech and in small startups, and most of the code everywhere is unglamorous, boring, and just needs to be written.
Satisfaction with the job also depends on what you want out of it. I know people who love building big data pipelines, and people who love building fancy UIs. Those two groups would find the other's job incredibly tedious.
The pay is nice but I find myself…remarkably unenvious as I get older.
Every type of work has its idiosyncrasies, which people will either get on with or not. Mentioning one without the others is a bit disingenuous, or it's whatever the opposite of the grass-is-greener bias is.
I've been using AI coding tools heavily for the past year. They're genuinely useful for the "plumbing" - glue code, boilerplate, test scaffolding. But where they consistently fail is reasoning about system-level concerns: authorization boundaries, failure modes, state consistency across services.
The article mentions AI works best on "well-defined prompts for already often-solved problems." This is accurate. The challenge is that in production, the hard problems are rarely well-defined - they emerge from the interaction between your code and reality: rate limits you didn't anticipate, edge cases in user behavior, security assumptions that don't hold.
Craft isn't about writing beautiful code. It's about having developed judgment for which corners you can't cut - something that comes from having been burned by the consequences.
That's why I'm of the opinion that for senior developers/architects, these coding agents are awesome tools.
For a junior developer? Unless they are of the curious type and develop the systems-level understanding on their own... I'd say there's a big chance the machine is going to replace their job.
Popular music tends to be generic. Popular content is mostly brainrot these days. Popular software is often a bloated mess because most users’ lives don’t revolve around software. They use software to get something done and move on.
I never understood the appeal of “craft” in software. Early computer pioneers were extremely limited by the tech of their time, so the software they hacked together felt artsy and crafty. Modern software feels industrial because it is industrial - it’s built in software factories.
Industrial software engineers don’t get paid to do art. There are research groups that do moonshot experiments, and you can be part of that if it’s your thing. But lamenting the lack of craft in industrial software is kind of pointless. Imagine if we’d stopped at crafty, handmade auto engines and never mass-produced them at scale. We don’t lament “crafty engines” anymore. If you want that, go buy a supercar.
Point is: AI is just another tool in the toolbox. It’s like Bash, except calling it that won’t pull in billions of dollars in investment. So “visionaries” call it ghost in the machine, singularity, overlord, and whatnot. It produces mediocre work and saves time writing proletariat software that powers the world. Crafty code doesn’t pay the bills.
But I’m not saying we shouldn’t seek out fun in computing. We absolutely should. It’s just that criticizing AI for not being able to produce art is an old thing. The goalpost keeps shifting, and these tools keep crushing it.
I don’t use AI to produce craft, because I don’t really do craft in software - I have other hobbies for that. But I absolutely, proudly use it to generate mediocre code that touches millions of people’s lives in some way.