Some people love programming, for the sake of programming itself. They love the CS theory, they love the tooling, they love most everything about it.
Other people see all that as an means to an end - and find no joy from the technical aspect of creating something. They're more interested in the end result / product, rather than the process itself.
I think that if you're in group A, it can be difficult to understand group B. In vice versa.
I'm a musician, so I love everything about creating music. From the theory, to the mastery of the instrument, the tens of thousands of hours I've poured into it...finally being able to play something I never thought I'd be able to, just by sheer willpower and practice. Coming up with melodies that feel something to me, or I can relate to something.
On the other hand, I know people that want to jump straight to the end result. They have some melody or idea in their head, and they just want to generate some song that revolves around that idea.
I don't really look down on those people, even though the snobs might argue that they're not "real musicians". I don't understand them, but that's not really something I have to understand either.
So I think there are a lot of devs these days, that have been honing their skills and love for the craft for years, that don't understand why people just want things to be generated, with no effort.
I think it's worth pointing out that most people are both these things at different times.
There's things I care about and want a deep understanding of but there's plenty of tasks I want to just "go away". If I had an junior coder - I'd be delegating these. Instead I use AI when I can.
There's also tasks where I want a jump start. I prefer fixing/improving code over writing from scratch so often a bad AI attempt is still valuable to me.
Why should I have a junior developer who is going to do negative work instead of poaching a mid developer who is probably underpaid since salary compression and inversion are real?
As a manager, say I do hire a junior developer, invest time into them and they level up. I go to the HR department and tell them that they deserve a 30% raise to bring them inline with the other mid level developers.
The HR department is going to say that’s out of policy and then the developer jumps ship.
> Why should I have a junior developer who is going to do negative work instead of poaching a mid developer who is probably underpaid since salary compression and inversion are real?
The tragedy of the commons in a nutshell. Maybe everyone should invest in junior developers so that everyone has mid-level developers to poach later?
Not only that but teaching is a fantastic way to learn. Its easy to miss the learning though because you get the most when you care. If you care you take time to think and you're forced to contend with things you've taken for granted. You're forced to revisit the things you've tabled because you didn't have the time or expertise to deal with it at the time.
There's no doubt about it, there's selfish reasons to teach, mentor, and have a junior under you. We're social creatures. It should be no surprise that what's good for the group is usually good for yourself too. It's kinda as if we were evolutionarily designed to be this way or something ¯\_(ツ)_/¯
Everyone says they don't have time, but you get a lot of time by doing things right instead of doing things twice. And honestly, we're doing it a lot more than twice.
I just don't understand why we're so ready and willing to toss away a skill that allowed us to become the most successful creature on the planet: forethought. It's not just in coding but we're doing it everywhere. Maybe we're just overloaded but you need forethought to fix that, not progressively going fast for the sake of going fast
I’m not a manager by the way, my previous comment was more of a devil’s advocate/hypothetical question.
I leveled up because I practice mentoring others. But it still doesn’t make sense for the organization to hire juniors. Yes I realize someone has to. It’s especially true for managers who have an open req to fill because they need work done now.
On the other hand, my one, only and hopefully last role in BigTech where I worked previously, they could afford to have an intern program and when they came back after college have a 6 month early career/career transition program to get them up to speed. They could afford the dead weight loss.
Many have said that it's useful to delegate writing boilerplate code to an AI so that you can focus on the interesting bits that you do want to write yourself, for the sake of enjoying writing code.
I recognize that and I kind of agree, but I think I don't entirely. Writing the "boring" boilerplate gives me time to think about the hard stuff while still tinkering with something. I think the effect is similar to sleeping on it or taking a walk, but without interrupting the mental cruncing that's going in my brain during a good flow. I piece together something mundane that is as uninteresting as it is mandatory, but at the same time my subconscious is thinking about the real stuff. It's easier that way because the boilerplate does actually, besides being boring, still connect to the real stuff, ultimately.
So, you're kind of working on the same problem even if you're just letting your fingers keep typing something easy. That generates nice waves of intensity for my work. My experience regarding AI tends to break this sea of subconsciousness: you need to focus on getting the AI to do the right thing which, unlike typing it yourself, is ancillary to the original problem. Maybe it's just a matter of practise and at some point I can keep my mind on the domain in question eventhough I'm working an AI instead of typing boilerplate myself.
The first time you write the code to accomplish something you get your highs.
IMHO there's no joy in doing the same thing multiple times. DRY doesn't help with that, you end up doing a lot of menial work to adapt or integrate previous code.
I've always distilled this down to people who like the "craft" and those who like the "result".
Of course, everything is on a scale so it's not either/or.
But, like you, how I get there matters to me, not just the destination.
Outside the context of music, a project could be super successful but if the journey was littered with unnecessary stress due to preventable reasons, it will still leave a bad taste in my mouth.
> I've always distilled this down to people who like the "craft" and those who like the "result".
I find it very unlikely anyone who only likes the results will ever pick up the craft in the first place
It takes a very specific sort of person to push through learning a craft they dislike (or don't care about) just because they want a result badly enough
What's "the result"? Because I don't like how this divide is being stated (it's pretty common).
Seems to me that "the result" is "the money" and not "the product".
Because I'd argue those that care about the product, the thing being built, the tool, take a lot of pride in their work. They don't cut corners. They'll slog through the tough stuff to get things done.
These things align much more with the "loves coding" group than "the result". Frankly, everyone cares about "the result" and I think we should be clear about what is actually meant
The issue with programming is that it isn't like music or really any other skill where you get feedback right away and operate in a well understood environment. And a lot of patterns are not well designed as they are often based on what a single developer things the behavior ought to be instead of something more deterministic like the laws of physics that influence the cord patterns we use in music.
Nope, your code might look excellent. Why the hell isn't it running though? Three hours later you find you added a b when you closed your editor somewhere in the code in a way your linter didn't pick up and the traceback isn't clear about, maybe you broke some all important regex, it doesn't matter. One second, it's fixed, and you just want to throw the laptop out the window and never work on this project again. So god damned stupid.
And other things are frusterating too. Open a space deliminated python file, god forbid you add a tab without thinking. And what is crazy about that is if the linter is smart enough to say "hey you put a tab here instead of spaces for indent" then why does it even throw the error and not just accept both spaces and tabs? Just another frustration.
Really I would love to just go at it, write code, type, fly, be in the flow state, like one does building something with the hands or making music or doing anything in the physical world. But no. Constant whack a mole. Constantly hitting the brakes. Constant blockers. How long will this take to implement? I have no fucking idea man, could be 5 seconds or 5 weeks and you don't often know until you spend the 5 seconds and see that didn't do it yet.
I’m in group A and B. I do programming for the sake for it at home. I read tons of technical books for the love of it. At work, though, I do whatever the company wants or whatever they allow me… I just do it for the money.
Some people like to play a musical instrument, others to compose music. Those who play range from classicists, who have limited ability to improvise or interpret, to popular or jazz, or composition, where creativity and subtle expression is the life blood of the work.
Programming is similar to music. (A great many software innovators in the 70s and 80s had musical roots). But AI prunes away all the creativity and stylistic expression from the composition and the performance when designing and building software, reducing the enterprise to mere specification -- as if the libretto of the opera were merely an outline, and even that was based on Cliff Notes.
The case for using AI to code is driven strictly by economics and speed. Stylistically and creatively, AI is a no-brainer.
> On the other hand, I know people that want to jump straight to the end result. They have some melody or idea in their head, and they just want to generate some song that revolves around that idea.
I don't really look down on those people, even though the snobs might argue that they're not "real musicians". I don't understand them, but that's not really something I have to understand either.
So if someone generates their music with AI to get their idea to music you don’t look down on it?
Personally I do, if you don’t have the means to get to the end you shouldn’t get to the end and that goes double in a professional setting. If you are just generating for your own enjoyment go off I guess but if you are publishing or working for someone that’ll publish (aka a professional setting) you should be the means to the end, not AI.
If you're talking about a person using an LLM, or some other ML system, to help generate their music then the LLM is really just a tool for that person.
I can't run 80 mph but I can drive a car that fast, its my tool to get the job done. Should I not be allowed to do that professionally if I'm not actually the one achieving that speed or carrying capacity?
Personally my concerns with LLMs are more related to the unintended consequences and all the unknowns in play given that we don't really know how they work and aren't spending much effort solving interoperability. If they only ever end up being a tool, that seems a lot more in line with previous technological advancements.
> I can't run 80 mph but I can drive a car that fast, its my tool to get the job done.
Right, but if you use a chess engine to win a chess championship or if you use a motor to win a cycling championship, you would be disqualified because getting the job done is not the point of the exercise.
Art is (or should be) about establishing dialogues and connections between humans. To me, auto-generated art it's like choosing between seeing a phone picture of someone's baby and a stock photo picture of a random one - the second one might "get the job done" much better, but if there's no personal connection then what's the point?
Project Managers will tell you that "getting to a place" is the goal
Then you get to the place and they say "now load all of the things in the garage into the truck"
But oops. You didn't bring a truck, because all they told you was "please be at this address at this time", with no mention of needing a truck
My point is that the purpose of commercial programming is not usually just to get to the goal
Often the purpose of commercial programming is to create a foundation that can be extended to meet other goals later, that you may not even be remotely aware of right now
If your foundation is a vibe coded mess that no one understands, you are going to wind up screwed
And yes, part of being a good programmer includes being aware of this
I work with quite a few F100 companies. The actual amount of software most of them create is staggering. Tens of thousands of different applications. Most of it is low throughput and used by a small number of employees for a specific purpose with otherwise low impact to the business. This kind of stuff has been vibe coded long before there was AI around to do it for you.
At the same time human ran 'feature' applications like you're talking about often suffer from "let the programmer figure it out" problems where different teams start doing their own things.
What has always held true so far: <new tool x> abstracts challenging parts of a task away. The only people you will outcompete are those, who now add little over <new tool x>.
But: If in the future people are just using <new tool x> to create a product that a lot of people can easily produce with <new tool x>, then, before long, that's not enough to stand out anymore. The floor has risen and the only way to stand out will always be to use <new tool x> in a way that other people don't.
People who can't spin pottery shouldn't be allowed to have bowls, especially mass produced by machine ones.
I understand your point, but I think it is ultimately rooted in a romantic view of the world, rather than the practical truth we live in. We all live a life completely inundated with things we have no expertise in, available to us at almost trivial cost. In fact it is so prevalent that just about everyone takes it for granted.
> So if someone generates their music with AI to get their idea to music you don’t look down on it?
It depends entirely on how they're using it. AI is a tool, and it can be used to help produce some wonderful things.
- I don't look down on a photographer because they use a tool to take a beautiful picture (that would have taken a painter longer to paint)
- I don't look down on someone using digital art tools to blur/blend/manipulate their work in interesting ways
- I don't look down on musicians that feed their output through a board to change the way it sounds
AI (and lots of other tools) can be used to replace the creative process, which is not great. But it can also be used to enhance the creative process, which _is_ great.
If they used an algorithm to come up with a cool melody and then did something with it, why look down on it?
Look at popular music for the last 400 years. How is that any different than simply copying the previous generations stuff and putting your own spin on it?
If you heard a CD in 1986 then in 2015 you wrote a song subconsciously inspired by that tune, should I look down on you?
I mean, I'm not a huge fan of electronic music because the vast majority of it sounds the same to me, but I don't argue that they are not "real musicians".
I do think that some genres of music will age better than others, but that's a totally different topic.
I think you don't look down at the product of AI, only the process that created it. Clearly the craft that created the object has become less creative, less innovative. Now it's just a variation on a theme. Does such work really deserve the same level of recognition as befitted Beethoven for his Ninth or Robert Bolt for his "A Man for all Seasons"?
Your company doesn’t care about how you got to the end, they just care about did you get there and meet all of the functional and non functional requirements.
My entire management chain - manager, director and CTO - are all technical and my CTO was a senior dev at BigTech less then two years ago. But when I have a conversation with any of them, they mostly care about whether the project I’m working on/leading is done on time/within budget/meets requirements.
As long as those three goals are met, money appears in my account.
One of the most renown producers in hip hop - Dr. Dre - made a career in reusing old melodies. Are (were) his protégés - Easy-E, Tupac, Snoop, Eminem, 50 cent, Kendrick Lamar, etc - not real musicians?
Have you heard the saying there is too much performance in the practice room? It's the same with programming. Performance is the goal, and practice is how you get there. No one seems to be in danger of practicing too much though.
i mean how far are you willing to take that argument? every decade has just been a new abstraction, imagine people flipping switches or in raw assembly talking about how they don't "understand" you now with your no effort. or even those who don't "understand" why you use your autocomplete and fancy IDE, preferring a simple text editor.
i say this as someone who cut my teeth on this stuff growing up and seeing the evolution, it's both. and at some point it's honestly elitism and gatekeeping. i sort of cringe when it's called a "craft" because it's not like woodworking or something. the process is both full of joy but so is the end result, and the nature of our industry is that the process is ALWAYS changing.
you accumulate a depth of knowledge and watch as it washes away in a few years. that kind of change, and the big kind of change that AI brings scares people so they start clinging to it like it's some kind of centuries old trade lol.
It is not just gatekeeping. It is a stubborn refusal to see that one could be programming something much more sophisticated if they could use these iteration loops efficiently.
Many of these folks would do well to walk over to the intersection of Market, Bush, and Battery Streets in San Francisco and gaze up at the Mechanics Monument.
> It is a stubborn refusal to see that one could be programming something much more sophisticated if they could use these iteration loops efficiently
Programming something more sophisticated with AI? AI is pretty much useless if you're doing anything somewhat novel. What it excels at is vomiting code that has already been written a million times so you can build yet another Electron cross-platform app.
I have actually had some really great flow evenings lately, the likes of which I have not enjoyed in many years, precisely because of AI-assisted coding. The trick is to break the task down in to components that are of moderate complexity so that the AI can handle them (Gemini 2.5 Pro one-shots), and keep your mind on the high-level design which today's AI cannot coordinate.
What helps me is to think of it like I'm a kid again, learning to code full of ideas but without any pre-conceived notions. Rather than the Microsoft QuickBasic manual in my hands, I've got Gemini & Claude Code. I would be gleefully coding up a storm of games, websites, dubious webcrawlers, robots, and lord knows what else. Plenty of flow to be had.
I always wonder what kind of projects are we talking about.
I am currently writing a compiler and simulation engine for differential-algebraic equations. I tried few models, hoping they would help me, but they could not provide any help with small details nor with bigger building blocks.
I guess if you code stuff that had been coded a lot in public repos, it is fine, otherwise AI does not help in any way. Actually, I think I wasted more time trying to make it produce the output I wish for than it took me to do this myself.
That's been my experience. If it's been solved a million times, it's helpful. If you're out on the frontier where there's no public code, it's worse than useless.
If you're somewhere in between (where I am now) it's situationally useful for small sub-components but you need to filter it heavily or you'll end up wasting a day or two going down a wrong rabbit-hole either because you don't know the domain well enough to tell when it's bullshitting or going down a wrong path, or don't know the domain well enough to use the right keyword to get it to cough up something useful. I've found domain knowledge essential for deciding when it's doing something obviously wrong instead of saying "I don't know" or "This is the wrong approach to the problem".
For the correct self-contained class or block of code, it is much faster to specify the requirements and go through a round or two of refinement than it is to write it myself. For the wrong block of code it's a complete waste of time. I've experienced both in the last few days.
I don't even think you have to be on the frontier for LLMs to lose most of their effectiveness. Large legacy codebases with deeply ingrained tribal knowledge and loads of idiosyncrasies and inconsistencies will do the trick. Sad how most software projects end in this state.
Obviously LLMs in this situation will still be insanely helpful, but in the same way that Google searches or stack overflow is insanely helpful.
For me it's been toy games built on web languages, which happens to be something I toyed with via my actual raw skills for the past 15 years. LLMs have opened many new doors and options for what I can build because I now technically "know everything" in the world via LLMs. Stuff that I would get stuck wasting hours on is now solved in minutes. But then it ALWAYS reaches a point where the complexity the LLM has generated is too much and the model can no longer iterate on what it's built.
people seem to forget this type of argument from the article was used for stack overflow for years, calling it the destruction of programming. "How can you get into flow when you are just copying and pasting?". Those same people are now all sour grapes for AI assisted development. There will always be detractors saying that the documentation you are using is wrong, the tools that you are using are wrong, and the methodology you are using is wrong.
AI assisted development is no different from managing an engineering team. "How can you trust outsourced developers to do anything right? You won't understand the code when it breaks"... "How can you use an IDE, vim is the only correct tool" etc etc etc.
Nothing has changed besides the process. When people started jumping on object orientation they called procedures the devil itself, just as procedures were once called structured programming and came to banish away the considered harmful goto. Everything is considered harmful when theres something new around the corner that promises to either make development more productive or developers more interchangeable. These are institutional requirements and will never go away.
Embrace AIOP (AI oriented programming) to banish copy and paste google driven development which is now considered harmful.
The issue with "AIOP" is that you don't have a litany of others (as is the case with SO) providing counter examples, opinions, updated best practices, etc. People take the AI output as gospel and suffer for it without being exposed so the ambiguity that surrounds implementing things.
Will an engineering team ever be able to craft a thing of wonder, that surprises and delights? I think great software can do that. But I've seen it arise only rarely, and almost always as originating from one enlightened mind, someone who imagined a better way than the well-trod paths taken by so many who went before. I can imagine AI as a means to go only 'where man gas gone before'.
I'm a classic engineer, so lots of experience with systems and breaking down problems, but probably <150 hours programming experience over 15 years. I know how computers work and "think", but I an awful at communicating with them. Anytime I have needed to program something I gotta crash course the language for a few days.
Having LLMs like 2.5 now are total game changers. I can basically flow chart a program and have Gemini manifest it. I can break up the program into modules and keep spinning up new instances when context gets too full.
The program I am currently working on is up to ~5500 LOC, probably across 10ish 2.5 instances. It's basically an inventory and BOM management program that takes in bloated excel BOMs and inventory, and puts it in an SQLite database, and has a nice GUI. Absolutely insane how much faster SQLite is for databases than excel, lol.
I've heard a _lot_ of stories like this. What I haven't heard is stories about the deployment of said applications and the ability of the human-side author to maintain the application. I guess that's because we're in early days for LLM coding, or the people who did this aren't talking (about their presumed failures... people tend to talk about successes publicly, not the failures).
At my day job I have 3 programs written by LLM used in production. One written by GPT-4 (in spring 2023) and recently upgraded by gemini 2.5, and the other two by Claude 3.7
One is a automatic electronics test system that runs tests and collects measurements (50k+ readings across 8-12 channels)(GPT-4, now with a GUI and faster DB thanks to 2.5). One is a QC tool to help quickly make QC reports in our companies standard form (3.7). And the last is a GUI CAD tool for rendering and quickly working through ancient manufacturing automation scripts from the 80's/90's to bring them up to compatibility with modern automation tooling (3.7).
I personally think that there is a large gap between what programs are, and how each end user ultimately uses them. The programs are made with a vast scope, but often used narrowly by individuals. The proprietary CAD program that we were going to use originally for the old files was something like $12k/yr for a license. And it is a very powerful software package. But we just needed to do one relatively simple thing. So rather than buy the entire buffet, buy the entire restaurant, Claude was able to just make simple burger.
Would I put my name on these and sell to other companies? No. Am I confident other LLM junkies could generate similar strongly positive outcomes with bespoke narrow scope programs? Absolutely.
Added joy for me as well mostly by giving me the relevant API calls I need straight away, from publically available documentation, instead of having to read docs myself. "How do I do X in Y"
And if something's not obvious I can always fetch the specifics of any particular calls. But at least I didn't have to find the name of that call in the first place.
There's nothing stopping you from coding if you enjoy it. It's not like they have taken away your keyboard. I have found that AI frees me up to focus on the parts of coding I'm actually interested in, which is maybe 5-10% of the project. The rest is boiler plate, cargo-culted, Dockerfile, build system and bash environment variable passing circle of hell that I really could care less about. I care about certain things that I know will make the product better, and achieve its goals in a clever and satisfying way.
Even when I'm stuck in hell, fighting the latest undocumented change in some obscure library or other grey-bearded creation, the LLM, although not always right, is there for me to talk to, when before I'd often have no one. It doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help, even if its not always right because its at least always more reliable and you don't have to bother some grey beard who probably hates you anyway.
> The rest is boiler plate, cargo-culted, Dockerfile, build system and bash environment variable passing circle of hell that I really could care less about.
Even more so, I remember making a Chrome extension and feeling intimidated. I knew that I'd be comfortable with most of it given that JS is used but I just didn't know how to start.
With an LLM it is way faster to spin up some default config and get going versus reading a tutorial. What I've noticed in that respect is that I just read what it does and then immediately reason why it's there. "Oh, there's a manifest.json file with permissions and a few other things, fair, makes sense. Oh, so you have the HTML/CSS/JS of the extension, you have the HTML/CSS/JS of the page you're injecting some code into and you have the JS of a background worker. Ah yea, I get that."
Then the code won't compile, or more likely your editor/IDE will say that it's invalid code. If you're using something like Cursor in agent mode, if invalid code is generated then it gets detected and the LLM keeps re-running until something is valid.
> It is better to read documentations and tutorials first.
I "trust" LLM's more than tutorials, there's so much garbage out there. For documentation, if the LLM suggests something, you can see the docstrings in your IDE. A lot of the time that's enough. If not, I usually go read the implementation if I _actually_ care about how something works, because you can't always trust documentation either.
> Plenty of incorrect code compiles. It is a very bad sign that people are making comments like "Then the code won't compile".
I interpreted the "hallucination" part as the AI using functions that don't exist. I don't consider that a problem because it's immediately obvious.
Yes, AI can suggest syntactically valid code that does the wrong thing. If it obviously does the wrong thing, then that's not really an issue either because it should be immediately obvious that it's wrong.
The problem is when it suggests something that is syntactically valid and looks like it works but is ever slightly wrong. But in my experience, it's pretty common to come across that stuff like that in "tutorials" as well.
> Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.
I pretty strongly disagree. As soon as it became popular for developers to have a "brand", the amount of garbage started growing. The stuff written before the late 00's was mostly good, but after that the balance began slowly shifting towards garbage. AI definitely increased the rate at which garbage was generated though.
> Yes, AI can suggest syntactically valid code that does the wrong thing
To be fair, I as a dev with ten or fifteen years experience I do that too. That's why I always have to through test the results of new code before pushing to production. People act as if using AI should remove that step, or alternatively, as if it suddenly got much more burdensome. But honestly it's the part that has changed least for me since adopting an AI in the loop workflow. At least the AIncan help with writing automated tests now which helps a bit.
Do you mean the laconic and incomplete documentation? And the tutorials that range from "here's how you do a hello world" to "draw the rest of the fucking owl" [0], with nothing in between to actually show you how to organise a code base or file structure for a mid-level project?
Hallucinations are a thing. With a competent human on the other end of the screen, they are not such an issue. And the benefits you can reap from having LLMs as a sometimes-mistaken advisory tool in your personal toolbox are immense.
The kind of documentation you’re looking for is called a tutorial or a guide, and you can always buy a book for it.
Also something are meant to be approached with the correct foundational knowledge (you can’t do 3D without geometry, trigonometry, and matrixes. And a healthy dose of physics). Almost every time I see people strugling with documentation, it was because they lacked domain knowledge.
What do you do if you "hallucinate" and write the wrong code? Or if the docs/tutorial you read is out of date or incorrect or for a different version than you expect?
That's not a jab, but a serious question. We act like people don't "hallucinate" all the time - modern software engineering devops is all about putting in guardrails to detect such "hallucinations".
1. Code doesn't compile. This case is obvious on what to do.
2. Code does compile.
I don't work in Cursor, I read the code quick, to see the intent. And when done with that decide to copy/paste it and test the output.
You can learn a lot by simply reading the code. For example, when I see in polars a `group_by` function call but I didn't know polars could do that, now I know because I know SQL. Then I need to check the output, if the output corresponds to what I expect a group by function to do, then I'll move on.
There comes a point in time where I need more granularity and more precision. That's the moment where I ditch the AI and start to use things such as documentation and my own mind. This happens one to two hours after bootstrapping a project with AI in a language/library/framework I initially knew nothing about. But now I do, I know a few hours worth of it. That's enough to roughly know where everything is and not be in setup hell and similar things. Moreover, by just reading the code, I get a rough idea on how beginner to intermediate programmers think about the problem space the code is written in as there's always a certain style of writing certain code. This points me into the direction on how to think about it. I see it as a hint, not as the definitive answer. I suspect that experts think differently about it, but given that I'm just a "few hours old" in the particular language/lib/framework, I think knowing all of this is already really amazing.
AI helps with quicker bootstrapping by virtue of reading code. And when it gets actually complicated and/or interesting, then I ditch it :)
Even when it hallucinates it still solves most of the unknown unknowns which is good for getting you unblocked. It's probably close enough to get some terms to search for.
Sample size of 1, but it definitely did in my case. I've gained a lot more confidence when coding in domains or software stacks I've never touched before, because I know I can trust an LLM to explain things like the basic project structure, unfamiliar parts of the ecosystem, bounce ideas off off, produce a barebones one-file prototype that I rewrite to my liking. A whole lot of tasks that simply wouldn't justify the time expenditure and would make it effort-prohibitive to even try to automate or build a thing.
Because I've used it for problems where it hallucinated some code that didn't actually exist but that was good enough to know what the right terms to search for in the docs were.
I think the fear for those of us who love coding, stability and security, that we are going to be confronted with apples that are rotten on the inside and our work, our love, is going to turn (even more so) into pain. The challenge in computing is that the powers that decide have little overview over the actual quality and longevity of any endeavour.
I work as a consultant assessing other people's code and it's hard not to lose my religion, sort of speak.
So much this. The AI takes care of the tedious line by line what’s-the-name-of-that-stdlib-function parts (and most of the tedious test-writing parts) and lets me focus on the interesting bits like what it is I want to build and how the pieces should fit together. And debugging, which I find satisfying.
Sadly, I find it sorely lacking at dealing with build systems and that particular type of boilerplate, mostly because it seems to mix up different versions of things too much and gives you totally broken setups more often than not. I’d just as soon never deal with the he’ll that is front end build/lint/test config again.
AI generated tests are genuinely fantastic, if you treat them like any other AI generated code and review them thoroughly.
I've been writing Python for 20+ years and I still can't use unittest.mock without looking up the details every time. ChatGPT and Claude are great at that, which means I use it more often because I don't have to deal with the frustration of figuring it out.
Just as with anything else AI, you never accept test code without reviewing it. And often it needs debugging. But it handles about 90% of it correctly and saves a lot of time and aggravation.
Depends on the language. Python for instance has a massive default library, and there are entire modules I use anywhere from one a year to once a decade —- or never at all until some new project needs them.
I struggle to think how one person is supposed to interact with that many languages on a daily (or even weekly) basis.
I’ve been on projects with multiple languages, but the truly active code was done in only two. The other languages were used in completed modules where we do routine maintenance and rare alterations.
"I struggle to think how one person is supposed to interact with that many languages on a daily (or even weekly) basis."
LLMs. I've expanded the circle of languages I use on a frequent basis quite dramatically since I started leaning on LLMs more. I used to be Python, SQL and JavaScript only. These days I'm using jq, AppleScript, Bash, Go, awk, sed, ffmpeg and so many more.
I used to avoid infrequently used DSLs because I couldn't hold them in my memory. Now I'll happily use "the best tool for the job" without worrying about spinning up on all the details first.
They perhaps haven’t taken away your keyboard but anecdotally, a few friends work at places where their boss is requiring them to use the LLMs. So you may not have to code with them but some people are starting to be chained to them.
Yes, there are bad places to work. There are also places that require detailed time tracking, do not allow any time to write tests, have very long hours, tons of on-call alerts, etc.
How long until it becomes the rule because of some arbitrary "productivity" metric? Sure, you may not be forced to use it, but you'll be fire for being "unproductive".
No, because it's usually a few years old and already obsolete - the frameworks and the language have gone through a gazillion changes and what you did in 2021 suddenly no longer works at all.
I mean, the training data also has a cutoff date and changed beyond that are not reflected in the code suggestions.
Also, I know that people love to joke on modern software and JS in particular. But if you take react code from 2020 and drop it into a new react codebase it still works. Even class based components work. Yes, if you jumped on the newest framework bandwagon every time stuff will break all the time, but AI won’t be able to help you with that either. If you went for relatively stable frameworks, you can re use boilerplate completely or with relatively minimal adjustments
True. But LLMs have access to the web. I’ve told ChatGPT plenty of times to verify an SDK API or if I knew the API was new, I just gave it a link to the documentation. This was mostly around various AWS SDKs
The search improvements to o3 and o4-mini have made a huge difference in the last couple of weeks.
I ran this prompt (and others like it) and it actually worked!
This code needs to be upgraded to the new
recommended JavaScript library from
Google. Figure out what that is and
then look up enough documentation to
port this code to it
Ehh most people are good about at least throwing a warning before they break a legacy pattern. And you can also just use old versions of your tools. I'm sure the 2021 tool still does the job. Most people aren't working on the bleeding edge here. Old versions of numpy are fine.
I keep seeing that suggestion as well and the only sensible way I see would be to use one off boilerplate, anything else does not make sense.
If you keep re-using boilerplate once in a while copying it from elsewhere is fine. If you re-use it all the time, just get a macro setup in your editor of choice. IMHO that is way more efficient than asking AI to produce somewhat consistent boilerplate
You know. I have my boilerplate in Rails and it is just a work of art... I simply clone my BP repo, bundle, migrate, run and I have user management, auth, smtp client, sms alerts, and literally everything I need to get started. And it was just this same week I decided to try a code assistant, and my result was shockingly good, once you provide the assistant with a good clean starting point, and if you are very clear on what you want to build, then the results are just too good to be dismissed.
So yes, boilerplate, but also yes, there is definitely something to be gained from using ai assistants.
Like many others writing here, I enjoy coding (well, mostly anyway), especially the when it requires deep thought and patient experimentation to get anywhere. It's also great to preside over finally wiring together the routines (modules, libraries) that bind a project into a coherent whole.
Haven't much used AI to assist. After all, hard enough finding authentic humans capable and willing to voluntarily review/critique one's code. So far AI doesn't consistently provide that kind of help. OTOH seems almost certain over time AI systems will improve in terms of specific and comprehensive "insights" into the particular types of code one is writing.
I think an issue is that human creativity is hard to measure. Likely enough AI is even tougher to assess. Probably AI will increasingly be assigned tasks like constructing project skeletons, assuring parts can be joined together without undue strain, handling "boilerplate" and other routine chores. To be sure the landscape will look different in 50 years, I'm certain we'd be amazed were we able to see what future systems will be doing.
In any case, we shouldn't hesitate to use tools that genuinely boost our creativity. One badly needed role would be enabling development of higher reliability software. Still that's a far cry from the contributions emanating from the best of human originality, talent and motivation.
> doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help, even if its not always right because its at least always more reliable and you don't have to bother some grey beard who probably hates you anyway.
All major tokenisers have explicit support for encoding arbitrary byte sequences. There's usually a consecutive range of tokens reserved for 0x00 to 0xFF, and you can encode any novel UTF-8 words or structures with it. Including emoji and characters that weren't a part of the model's initial training, if you show it some examples.
Pretty sure that we’re talking apples and oranges. Yes to the arbitrary byte sequences used by tokenizers, but that is not the topic of discussion. The question is will the tokenizer come up with words not in the training vocabulary. Word tokenizers don’t, but character tokenizers do.
Source: Generative Deep Learning by David Foster, 2nd edition, published in 2023. From “Tokenization” on page 134.
“If you use word tokens: …. willnever be able to predict words outside of the training vocabulary.”
"If you use character tokens: The model may generate sequences of characters that form words outside the training vocabulary."
Of course, it all depends how you use the LLM. While the same can be true for StackOverflow, the LLMs just scale the issues up.
> The rest is boiler plate, cargo-culted, Dockerfile, build system and bash environment variable passing circle of hell that I really could care less about.
Except you do care. It's why you're frustrated and annoyed. And good!!! That feeling is because what you're describing requires solving. If something is routine, automate it. But it's really not good to automate in a statistical way, especially when that statistical tool is optimized for human preference. Because remember that also means mistakes are optimized to be missed by humans.[0]
With expertise in anything, I'm sorry, but you also got to do the shit work. To be a great musician you gotta practice boring scales. It's true even if you just want to be a sub par one.
But a little grumpy is good. It drives you to fix things, and frankly, that's our job. The things that are annoying and creating friction don't need be repeated over and over, they need alternative solutions. The scripts you build are valuable. The "useless" knowledge you gain isn't so useless. Those little details add up without you knowing and make you better.
That undocumented code makes you frustrated and reminds you to document your own. You don't want to be a hypocrite. The author of the thing you're using probably thought the same thing: "No one is gonna use this garbage, I'm not going to waste my time documenting it". Yet here we are. Over and over again yet we don't learn the lesson.
I'm not gonna deny there's assholes. There are. But even assholes teach you. At worst, they teach you how not to act.
And some people are you telling you to RTM and not RTFM. Sure, it has lots of extra information in it that you don't need to get your specific job done, but have you also considered that it has lots of extra information in it? The person that wrote it clearly thought the context was important. Maybe it isn't. In that case, you learned a lesson in how not to write documentation!
What I'm getting at is that there's a lot of learning done all over the place. Trying to take out all the work and only have "the fun" is harming yourself and has a large potential to make less time for the fun stuff[0]. I'd be surprised if I'm alone in this, but a lot of stuff I enjoy now was stuff that originally frustrated me. IME this is pretty common! It's true for every person I know. Similarly, it's also true for things I learned that I thought I'd never use again. It always has a way of coming back.
I'm not going to pretend it's all fun and games. I'm explicitly saying it's not. But I'm confident in the long run it's better. Despite the lack of accuracy, I use LLMs (and Google, and even the TFM) like I would a solution guide the homework problems when I was in school. Try first, then consult. The struggle is an investment in your future. It sucks, but if all the best things in life were easy then we'd all have them. I'm just trying to convince you that it pays off.
I'm completely aware this is all context dependent. There's a time and place for everything. But given the percentages you mention (even taken as exaggeration), something sounds wrong. It's hard to suggest specific solutions without details but I'd be surprised if there weren't better and more rewarding solutions than having the LLM do it for you
[0] That's the big danger and what drives distrust in them. Because you need to work extra hard to find mistakes, increasing workload, not decreasing, because debugging is most of the job!
While it looks like a productivity boost, there's a clear price to pay. The more you use it, the less you learn and the less you are able to assess quality.
Frankly I don't want to spend 2 hours reading documentation just to find out some arcane incantation that gets the computer to do what I want it to do.
The interesting part of programming to me is designing the logic. It's the 'this, then that, except when this' flow that I'm really interested in, not the search for some obscure library that has some function that will parse this csv.
Llms are great for that, and let me get away from the pointless grind and into the things that I enjoy and are actually providing value.
The pair programming is also a super good thing. I work best when I can spitball and throw out random ideas and get quick feedback. Llms let me do that without bothering others who have their own work to do.
Most comments here surprise me: I am using Githubs Copilot / ChatGPT 4.0 at work with a code base which is mostly implements a basic CRUD service... and outside of small/trivial example (where the generated code is mostly okay), prompting is more often than not a total waste of time. Now, I wonder if I am just totally unable to write/refine good prompts for the LLM (as it works for smaller samples, I hope I am not too far off) or what could explain the huge discrepancy of experience.
(Just for the record: I would totally not mind if the LLM writes the code for the stuff I have to do at work.)
To clarify my questions:
- Who here uses LLMs to generate code for bigger projects at work? (>= 20k lines of code)
- If you use LLMs for bigger projects: Do you need to change your prompting strategy to get good results?
- What programming languages are you using in your code bases?
- Are there other people here who experience that LLMs are no help for non trivial problems?
I'm in the same boat. I've largely stopped using these tools other than asking questions about a language that I'm less familiar with or a complex type in typescript for which it can be helpful (sometimes). Otherwise, I felt like I was just wasting my time and becoming lazier/worse as a developer. I do wonder whether LLMs have hit a wall and we're in a hype cycle.
Yes, I have the same feeling about the wall/hype cycle. Most of my time is understanding code and formulating a plan to change code w/o breaking anything... even if LLMs would generate 100% perfect code on the first try, it would not help in a big way.
One thing I forgot to mention is asking LLMs questions from within the IDE instead of doing a web search... this works quite nice, but again, it is not a crazy productivity boost.
Same here. We have a massive codebase with large classes and the LLMs are not very helpful. Frontend stuff is okay sometimes but the backend models are too complex at this point, I guess.
Play with Cursor or Claude Code a bit and then make a decision. I am not on the this is going to replace Devs boat, but this has changed the way I code and approach things.
Could you perhaps point me to a youtube video which demonstrates an experienced prompter sculpting code with Cursor/Clause Code?
In my search I just found trivial examples.
My critic so far:
- Examples seem always to be creating a simple application from scratch
- Examples always use super common things (like create a blog / simple website for CRUD)
What I would love to see (see elsewhere): Adding a non trivial feature to a bigger code base. Just a youtube video/demonstration. I don't care about language/framework etc. ...
This morning I made this while sipping coffee, and it solves a real problem for my gf: https://github.com/kirubakaran/xmldiffer Sure it's not enormous, and it was built from scratch, but imho it's not a trivial thing either. It would've taken me at least a day or two of full time work, and I certainly don't have a couple of days to spare on a task like this. Instead, pair programming with AI made it into a fun relaxing activity.
You are just bad with prompting or working with very obscure language/framework or bad coding pattern or all of it.
I had a talk with a seasoned engineer who has been coding for 50 years and has created many amazing things over lifetime about him having really bad results with AI tools I suggested for him.
When I use AI for the same purposes in the same repo he's working on, it works nicely. When he does it, results are always not what he wants.
It comes down to a combination of him not understanding how to guide the LLMs to correct direction and using a language/framework (he's not familiar with) he can't judge the LLMs output.
It is really important to know what you want, be able to describe it in short points (but important points). Points that you know ai will mess up if you don't specify. And also be able to figure out which direction the ai is heading with the solution and correct it EARLY rather than later.
Not overloading context/memory with unnecessary things. Focusing on key areas to improve and much more.
I'm using AI to get solutions done that I can definitely do myself but it'll take a certain amount of time to hunt down all documentation, API/lib calls etc. With AI, 1/10th time is enough.
I've had massive success with java, js/TS, html css, go, rust, python, bitbucket pipelines/GitHub actions, cdk, docker compose, SQL, flutter/dart, swift etc.
I've had the same experience as the person to whom you're responding. After reading your post, I have to ask: if you're putting so much effort into prompting it with specific points, correcting it often, etc., why not just write the code yourself? It sounds like you're putting a good deal of effort into prompting it.
Aren't you worried that overtime you'll rely on it too much and your offhand knowledge will get worse?
I have read somewhere, that LLMs are mostly helpful to junior developers.
Is it possible the person claiming success with all these languages/tools/technologies is just on a junior level and is subjectively correct but has no point of reference how fast coding is for seniors and how quality code looks like?
Not OP, it be comes natural and doesn't take a lot of time.
Anyway, if you want to, LLMs can today help with a ton of programming languages and frameworks. If you use any of the top 5 languages and it still doesn't work for you, either you're doing some esoteric work or you're doing it wrong.
Could you point me to a youtube video or a blog post which demonstrates how LLMs help writing code which outperforms a proficient human?
My only conditions:
- It must be demonstrated by adding a feature on a bigger code base (>= 20 LOC)
- The added feature cannot be a leaf feature (means it must integrate with the rest of the system at multiple points)
- The prompting has to be less effort/faster than to type the solution in the programming language
You can chose any programming language/framework that you want. I don't care if it is Java, JavaScript, Typescript, C, Python, ... hell, I am fine with any language with or w/o a framework.
I do not rule out, that I am just very bad with prompting.
It just surprises me, that you write you had massive successes with "java, js/TS, html css, go, rust, python, bitbucket pipelines/GitHub actions, cdk, docker compose, SQL, flutter/dart, swift etc.", if you include the usual libraries/frameworks and the diverse application areas for these technologies, even with LLMs support it seems to me crazy to be able to make meaningful contributions in non trivial code bases.
Concerning SQL I can report another fail with LLMs, in a trivial code base with a handful of entities the LLM cannot come up with basic window functions.
I would be very interested if you could write up a blog post or could make a youtube video demonstrating your prompting skills... Perhaps demonstrating with a bigger open source project in any of the mentioned languages how to add a non trivial feature with your prompting skills?
> Now, I wonder if I am just totally unable to write/refine good prompts for the LLM (as it works for smaller samples, I hope I am not too far off) or what could explain the huge discrepancy of experience.
Programming language / stack plays plays a big role, I presume.
This comment section really shows the stark divide between people who love coding and thus hate AI, and people who hate coding and thus love AI.
Honestly, I suspect the people who would prefer to have someone or something else do their coding, are probably the devs who are already outputting the worst code right now.
I don't know if I'm a minority but I'd like to think there are a lot of folks like me out there.
You can compare it to someone who is writing assembly code and now they've been introduced to C. They were happy writing assembly but now they're thrilled they can write things more quickly.
Sure, AI could lead us to write buggier code. Sure, AI could make us dumber because we just have AI write things we don't understand. But neither has to be the case.
With better tools, we'll be able to do more ambitious things.
I think there are a lot of us, but the people who dislike AI are much more vocal in online conversations about it.
(The hype merchant, LinkedIn influencer, Twitter thread crowd are super noisy but tend to stick to their own echo chambers, it's rare to have them engage in a forum like Hacker News directly.)
I hate the reality of our current AI, which is benefitting corporations over workers, being used for surveillance and censorship (nevermind direct social control via misinformation bots), and is copying the work of millions without compensating them in order to do it.
And the push for coders to use it to increase their output, will likely just end up meaning expectations of more LoC and more features faster, for the same pay.
How is using Claude over Llama benefitting corporations over workers? I work with AI every day and sum total of my token spend across all providers is less than a single NVidia H100 card I'd have to buy (from a pretty big corporation!), at the very least, for comparable purpose?
How are self-hosted LLMs not copying the work of millions without compensating them for it?
How is the push for more productivity through better technology somehow bad?
Right, just how back in the day, people who loved writing assembly hated high level languages and people who found assembly too tedious loved compilers.
Picasso explicitly wanted his designs (for cutlery, plates, household items he designed) to be mass-produced, so your question is not as straightforward as you make it to be.
What is the connection to machine generated code? He designed the items manually and mass produced them.
No one objects to a human writing code and selling copies.
Apart from that, this is the commercial Picasso who loved money. His early pre-expressionist paintings are godlike in execution, even if someone else has painted a Pierrot before him.
I very much understand the result of code that it writes. But I have never gotten paid to code. I get paid to use my knowledge of computers and the industry to save the company money or to make the company money.
Do you feel the same way when you delegate assignments to more junior developers and they come back with code?
It is absolutely possible to enjoy both- I have used LLMs to generate code for ideas about alternate paths to take when I write my code- but prompt generation is not coding, and there are WAY too many people who claim to be coding when they have in fact done nothing of the sort.
> a far higher intensity
I'm not sure what this is supposed to mean. The code that I've gotten is riddled with mistakes and fabrications. If I were to use it directly, it would significantly slow my pace. Likewise, when I use LLMs to offer alternative methods to accomplish something, I have to take the time to sit down and understand what they're proposing, how to actually make it work, and whether that route(s) would be better than my original idea. That is a significant speed reduction.
The only way I can imagine LLMs resulting in "far higher intensity" is if I was just yolo'ing the code into my program, and then doing frantic integration, correction, and bugfix work afterwards.
Sure, that's "higher intensity", but that's just working harder and not smarter.
What if I prefer to have a clone of me doing my coding, and then I throw my clone under the bus and start to (angrily) hyperfocus explore and change every piece to be beautiful? Does this mean I love coding or I hate coding?
It's definitely a personality thing, but that's so much more productive for me, than convincing myself to do all the work from scratch after I had a design.
I guess this means I hate coding, and I only love the dopamine from designing and polishing my work instead of making things work. I'm not sure though, this feels like the opposite of hate coding.
That's where we start to disagree what future looks like, then.
It's not there yet, in that the LLM-clone isn't good enough. But amusingly a not nearly good enough clone of me already made me more productive, in that I'm able to deliver more while maintaining the same level of personal satisfaction with my code.
The question of increasing productivity and what that means for us as laborers is another entire can of worms, but that aside, I have never yet found LLM-gen'd code that met my personal standards, and sped up my total code output.
If I want to spend my time refactoring and bugfixing and rewriting and integrating, rather than writing from scratch and bugfixing, I can definitely achieve that by using LLM code, but the overall time has never felt different to me, and in many cases I've thrown out the LLM code after several hours due to either sheer frustration with how it's written, or due to discovering that the structure it's using doesn't work with the rest of the program (see: anything related to threading).
> This comment section really shows the stark divide between people who love singing and thus hate AI-assisted singing, and people who hate singing and thus love AI-assisted singing.
> Honestly, I suspect the people who would prefer to have someone or something else do their singing, are probably the singers who are already outputting the worst singing right now.
The point is: just because you love something, doesn't mean you're good at it. It is of course positively correlated with it. I am in fact a better singer because I love to sing compared to if I never practiced. But I am not a good singer, I am mediocre at best (I chose this example for a reason, I love singing as well as coding! :-D)
And while it is easier to become good at coding than at singing - for professional purposes at least - I believe that the effect still holds.
I think the analogy/ substitution falls apart in that singing is generally not very stable or lucrative (for 99.999% of singers), so it is pretty rare to find someone singing who hates it. Much less uncommon to find people working in IT who hate the specific work of their jobs.
And I think we do tend to (rightfully) look down on e.g. singers who lip-sync concerts or use autotune to sing at pitches they otherwise can't, nevermind how we'd react if one used AI singing instead of themselves.
Yes, loving something is no guarantee of skill at it, but hating something is very likely to correspond to not being good at it, since skills take time and dedication to hone. Being bad at something is the default state.
I have been working in IT for 5 years while being a professional musician for 8 years (in France and touring in Europe).
I've never met a single singer who told me they hate singing, on other hand, I can't even count how many of my colleagues told me how much they hate coding.
Another analogy would be with sound engineering. I've met sound engineer who hate their job as they would rather play music. They are also the ones whose jobs are likely to be replaced by AI.
And I would argue that the argument stand stills. AI Sound Engineers who hate working on sound are often the bad sound engineers.
> I think the analogy/ substitution falls apart in that singing is generally not very stable or lucrative (for 99.999% of singers), so it is pretty rare to find someone singing who hates it.
I tried to cover this particular case with:
> And while it is easier to become good at coding than at singing - for professional purposes at least - I believe that the effect still holds.
---
> Yes, loving something is no guarantee of skill at it, but hating something is very likely to correspond to not being good at it, since skills take time and dedication to hone. Being bad at something is the default state.
I tried to cover this particular case with:
> It is of course positively correlated with it.
---
> Being bad at something is the default state.
Well, skill-wise yes. But being talented at something can happen, even when you hate something.
> And I think we do tend to (rightfully) look down on e.g. singers who lip-sync concerts or use autotune to sing at pitches they otherwise can't, nevermind how we'd react if one used AI singing instead of themselves.
Autotune is de rigueur for popular music.
In general, I'm not sure that I agree with looking down on people.
I love coding - but I am not very good at it. I can describe what I want in great detail, with great specificity. But I am not personally very good at turning that detailed specification into the proper syntax and incantations.
AI is like jet fuel for me. It’s the translation layer between specs and code I’ve always wanted. It’s a great advisor for implementation strategies. It’s a way to try new strategies in code quickly.
I don’t need to get anyone else to review my code. Most of this is for personal projects.
I don’t really write professionally, so I don’t have a ton of need for it to manage realities of software engineering (large codebases, peer reviews, black box internal systems, etc). That being said - I do a reasonable amount of embedded Linux work, and AI understands the Linux kernel and device drivers very well.
To extend your metaphor: AI is like a magic microphone that makes all of my singing sound like Tony Rice, my personal favorite singer. I’ve always wanted to sound like him - but I never will. I don’t have the range or the training. But AI allows my coding level to get to that corresponding level with writing software.
Do you love coding, or do you love creating programs?
It seems like the latter given your metaphor being a microphone to make you seem like you could sing well, i.e. wanting the end state itself rather than the achievement via the process.
"wanted to sound like him" vs "wanted to sing like him"
I enjoy using tools to create, very much so. The process is fun to me. The thing I create is a record of the process/ work that went into it. Planning and making a cut with a circular saw feels good. Rattling a spray paint can is exciting.
I made a cyber deck several months back, and I opted to carve the case from wood rather than 3d printing or using a premade shell. That hands-on work is something I'm proud of. I don't even use the deck much, it was for the love of building one.
To be fair, I don't have any problem with people who do their jobs for the paycheck alone, because that's the world capitalism has forced us into. Companies don't care about or reward you for the skills you possess, only how much money you make them (and they won't compensate you properly for it, either), so there's no advantage to tying your self-worth up in what you produce for them.
But I do think that it's sad we're seeing creative skills, whether writing coding composing or drawing, be devalued by AI as we are.
> For the record: I can sing well.
That is awesome! It's a great skill to have, honestly. As someone whose body tends to be falling apart more than impressing anyone, I envy that. :)
yeah i definitely enjoy the craft and love of writing boilerplate or manually correcting simple errors or looking up functions /s. i hate how it's even divided into "two camps", it's more like a big venn diagram.
Who write boilerplate this day? I just lift the code from the examples in the docs (especially css frameworks). And I love looking at functions docs, because after doing it a few times, you develop an holistic understanding of the library and your speed increases. Kinda like learning a foreign language. You can use an app to translate everything, or asks for the correct word when the needs arises. The latter is a bit frustrating at the beginning, but that’s the only way to become fluent.
Seriously, I see this claim thrown around as though everyone writes the same starting template 50 times a week. Like, if you've got a piece of "boilerplate" code you're constantly rewriting... Save It! Put it in a repo or a snippet somewhere that you can just copy-paste when you need it.
You don't need a multi-million dollar LLM to give you slightly different boilerplate snippets when you already have a text editor on your computer to save them.
i think everyone here has extremely different ideas of what AI coding actually is and it's frustrating because basically everyone is strawmanning (myself included probably), as if using it means i'm not looking at documentation or not understanding what is goin on at all times.
it's not about having the LLM write some "starter pack" toy scaffold. i means when i implement functionality across different classes and need to package that up and adapt, i can just tell the LLM how to approach it and it can produce entire sections of code that would literally just be adaptations of certain things. or to refactor certain pieces that would just be me re-arranging shit.
maybe that's not "boilerplate", but to me it's a collosal waste of my time that could be spent trying to solve a new problem. you can't package that up into a "code snippet" and it's not worth the time carefully crafting templates. LLM can do it faster, better, and cost me near nothing.
> LLM can do it faster, better, and cost me near nothing.
And this is one the thing I'm skeptical about. The above use case is a symptom of all code and no design. It is a waste of time because you're putting yourself in a corner, architecture wise. Kinda like building on a crooked foundation.
I've never done refactoring where I'm writing a lot of code, it's mostly just copy-paste and rebuilding the connection between modules (functions, classes, files, packages,...). And if the previous connections were well understood and you have a clear plan for the new design, then it's a no-brainer to get there. Same when adapting code, I'm mostly deleting lines and renaming variables (regex is nice).
Maybe I'm misunderstanding things, but unless it's for small scripts or very common project types, I haven't seen the supposed productivity gain compared to traditional tooling.
1) refactoring. copy paste, re-arrange, extract, delete and rebuild the connection. i have the mental model and tell the LLM do do it across multiple files or classes. does it way faster and exactly how i would do it given the right prompt which is just a huge file that dictates how things are structured, style, weird edge cases i encountered as time goes on.
2) new features or sections based on existing. i have a service class and want to duplicate and wire up different sections across domains. not easy enough to just be templated, but LLM can do it and understand the nuances. again, generate multiple files across classes no problem.
i can do all these things "fast". i can do them even faster when using the LLM, it offloads the tediousness and i save my brian for other tasks. alot of times i'm just researching my next thing while it chugs away. i come back, lint and review and i'm good to go.
i'm honestly still writing the majority of the code myself, esp if it's like design stuff or new features where the requirements and direction aren't as clear, but when i need to it gives me a huge boost.
keeps me in the flow, i basically recharge while continuing to code. and it's not a small script but a full fledged app, albeit very straightforward architecture wise. the gains are very real. i'm just surprised at the sentiment on HN around it. it's not even just skepticism but outright dogging on it.
I love this detailed discussion of how people are actually using LLMs for coding, and I think this rarely happens in professional spaces currently.
I do see more people who seem to be using it to replace coding skill rather than augment it, and I do worry about management's ability to differentiate between those versus just reverting to LoC. And whether it will become a demand for more code, for the same pay.
Maybe it's different mindset at play. Refactoring these is my way of recharging (because I approach as a nice puzzle to learn how to do it effectively, kinda like a break from the main problem). And the LLM workflow don't sit well with me because I dislike checking every line of generated code. Traditional tooling is deterministic, so I do the check once and move on.
Maybe all code is boilerplate for them? I use libraries and frameworks exactly for the truly boilerplate part. But I still try to understand those code I depends on, as some times I want to deviate from the defaults. Or the bug might be in there.
It’s when you try to use an exotic language, you realize the amount of work that has been done to minimize dev time in more mainstream languages.
Every PR I have to review with an obviously LLM-generated title stuffed with adjectives, and a useless description containing an inaccurate summary of the code changes pushes me a little bit more into trying to make my side projects profitable in the hope that one takes off. It usually only gets worse from there.
Documentation needs to be by humans for humans, it's not a box that's there to be filled with slop.
> The actual documentation needs to be by humans for humans.
This is true for producing the documentation but if there is an LLM that can take said documentation and answer questions about it is a great tool. I think I get the answer far quicker with LLM than sifting through documentation when looking for existence of a function in a library or a property on an object.
The documentation are for answering your questions, it’s not a puzzle to be solved. Using the reference docs assumes that you already have an understanding about the thing that is being documented and you’re looking for specificity or details. If not, the correct move is to go through a book, a tutorial, or the user guide. Aka the introductory materials.
I think that comment is conflating 2 different things: 1) people like you and I who use LLMs for exploring alternative methods to our own, and 2) people who treat LLMs like Stack Overflow answers they don't understand but trust because it's on SO.
Yes, there are tasks or parts of the code that I'm less interested in, and would happily either delegate or negotiate-off to someone else, but I wouldn't give those to a writer who happens to be able to write in a way that approximates program code, I'd give them to another dev on the team. A junior dev gets junior dev tasks, not tasks that they don't have the skills to actually perform, and LLMs aren't even at an actual junior dev level, imhe.
I noted in another comment that I've also used LLMs to get ideas for alternate ways to implement something, or to as you said "jump start" new files or programs. I'm usually not actually pasting that code into my IDE, though- I've tried that, and the work to make LLM-generated code fit into my programs is way more than just writing out the bits I need, where I need. That is clearly not the case for a lot of people using LLMs, though.
I've seen devs submitting PRs with giant blocks of clearly LLM-gen'd code, that they've tried to monkey-wrench into working with the rest of the program (often breaking conventions or secure coding standards). And these aren't junior devs, they're devs that have been working here for years and years.
When you force them to do a code review, they know it's not up to par, but there is a weird attitude that LLM-gen'd code is more acceptable to be submitted with issues than personally-written code. As though it's the LLM's fault or job to fix, even though they prompted and copied and badly-integrated and PR'd it.
And that's where I think there is a stark divide. I think you're on my side of the divide (at least, I didn't get the impression that you hate coding), it just sounds like you haven't really seen the other side.
My personal dime-store psych theory is that it's the same mental mechanism that non-technical computer users fall into of improperly trusting/ believing computers to produce correct information, but now happening to otherwise technical folks too because "AI" is still a black box technology to most of us, like computers in general are to non-techies.
LLMs are really really cool, and really really impressive to me, and I've had 'wow' moments where they did something that makes you forget what they are and how they work, but you can't let that emotional reaction towards it override the part that knows it's just a token chain. When you do, people end up (obviously on the very extreme end) 'dating' them, letting them make consequential "decisions", or just over-trusting their output/code.
I like solving problems but I hate coding. Wasting 20 minutes because you forgot a semicolon or something is not fun. AI let's me focus on the problem and not bother with the tedious coding bit.
I write code to solve problems for my own use or for my hobby electronics projects. Asking chatgpt to write a script is faster than reading the documentation of some python library.
Just last week it wrote me a whole application and gui to open a webpage at a specific time. Yeah it breaks after the first trigger but it works for what I need.
And that's OK! I'm not trying to gatekeeping anyone from the title of coder or programmer. But what is fine for quick small scripts and throwaway code can be quite bad even for smallish projects. If you're trying to solve a problem in a systematic way, there's a lot of concerns that pertain to the durability of the solution.
There's a lot of literature about these concerns and a lot of methodologies to alleviate them. I (and others) are judging LLMs in light of those concerns. Mostly because speed was never an issue for us in prototypes and scripts (and it can be relaxing to learn about something while scripting it). The issue is always reliability (can it do what I want) and maintainability (can I change it later). Performance can also be a key issue.
Aside: I don't know the exact problem you were solving, but based on the description, that could have been done with systemd timers (macOS services are more of a pain to write). Yes, there's more to learn, but time triggering some command is a problem solved (and systemd has a lot more triggers).
I started “coding” in 1986 in assembly on an Apple //e and by the time I graduated from college, I had experience with 4 different processor families - 65C02, 68K, PPC and x86. I spent the first 15 years of my career programming in C and C++ along with other languages.
Coding is just a means to an end - creating enough business value to convince the company I’m working for to give me money that I can exchange for food and shelter.
If AI can help me do that faster, I’m going to use it. Neither do I want to spend months procuring hardware and managing building out a server room (been there done that) when I can just submit some yaml/HCL and have it done for me in a few minutes.
I love coding and don't love questioning AI and checking responses.
But the simple fact is I'm much more productive with AI and I believe this is likely true for most programmers once they get adjusted.
So for production, what I love the most doesn't really matter, otherwise I'd be growing tomatoes and guiding river rafting expeditions. I'm resigned to the fact the age of manually writing "for loops" is largely over, at least in my case.
If you're using those things to do *the core function* of the program you're writing, that's an issue.
SDKs and libraries are there to provide common (as in, used repeatedly, by many) functions that serve as BUILDING BLOCKS.
If you import a library and now your program is complete, then you didn't actually make a useful program, you just made a likely less efficient interface for the library.
BUT ALSO-
SDKs and libraries are *vetted* code. The advantage you are getting isn't just about it having been written for you, it's about the hundreds of hours of human code review, iteration, and thought, that goes into those libraries.
LLM code doesn't have that, so it's not about you benefitting from the knowledge and experience of others, it's purely about reducing personally-typed LoC.
And yes, if you're wholesale copy-pasting major portions of your program from stack overflow, I'd say that's about as bad as copy-pasting from ChatGPT.
No, problem is when others are no longer needed, a machine gets to do everything, and only a few selected humans get to take care of the replicator machine.
People aren't taking LLM code and then thoughtfully refactoring and improving it, they're using it to *avoid* doing that, by treating the generated code as though it's already had that done.
That's why the pro-LLM-code people in this very thread are talking about using it to automate away the parts of the coding they don't like. You really think they're then going to go back and improve on the code past it minimally working?
There will be no advancement from that, just mediocre or bad code going unreviewed and ignored until it breaks.
After all, if we lose the joy in our craft, what exactly are we optimizing for?
Solving problems for real people. Isn't the answer here kind of obvious?
Our field has a whole ethos of open-source side projects people do for love and enjoyment. In the same way that you might spend your weekends in a basement woodworking shop without furnishing your entire house by hand, I think the craft of programming will be just fine.
Same as when higher-level languages replaced assembly for a lot of use cases. And btw, at least in places I've worked, better traditional tooling would replace a lot more headcount than AI would.
The parent is saying that when higher-level languages replaced assembly languages you only had to learn the higher level language. Once you learned the higher level language the machine did precisely what you specified and you did not have to inspect the assembly language to make sure it was compliant. Furthermore you were forced to be precise and to understand what you were doing when you were writing the higher level language.
Now you don't really have to be precise at any level to get something 'working'. You may not be familiar with the generated language or libaries but it could look good enough (like the assembly would have looked good enough). So, sure, if you are very familiar with the generated language and libraries and you inspect every line of generated code then maybe you will be ok. But often the reason you are using an LLM is because e.g. you don't understand or use bash frequently enough to get it to do what you want. Well, the LLM doesn't understand it either. So that weird bash construct that it emitted - did you read the documentation for it? You might have if you had to write it yourself.
In the end there could be code in there that nothing (machine or human) understands. The less hard-won experience you have with the target and the more time-pressed you are the more likely it is that this will occur.
Exactly. If LLMs were like higher level languages you'd be committing the prompt. LLMs are actually like auto-complete, snippets, stackoverflow and rosetta code. It's not a higher level of abstraction, it's a tool for writing code.
The output of the LLM is determined by the weights (parameters of the artificial neural network) estimated in the training as well as a pseudo-random number generator (unless its influence, called "temperature", is set to 0).
That means LLMs behave as "processes" rather than algorithms, unlike any code that may be generated from them, which is algorithmic (unless instrcuted otherwise; you could also tell an LLM to generate an LLM).
The code that the compiler generates, especially in the C realm, or with dynamic compilers is also not regular, hence the tooling constraints in high integrity computing environments.
So what? I know most compilers are deterministic, but it really only matters for reproducible builds, not that you're actually going to reason about the output. And the language makes few guarantees about the resulting instructions.
I already see this happening with low code, SaaS and MACH architectures.
What used to be a project doing a CMS backend, now is spent doing configurations on a SaaS product, and if we are lucky, a few containers/serveless for integrations.
There are already AI based products that can automate those integrations if given enough data samples.
Many believe AI will keep using current programming languages as translation step, just like those Assembly developers thought compiling via Assembly text generation and feeding into an Assembly would still be around.
No, only primitive UNIX toolchains still do this, most modern compilers generate machine code directly, without having to generate Assembly text files and executing the Assembler process on it.
You can naturally revert to old ways, by asking for the Assembly manually, and call the Assembler yourself.
> Solving problems for real people. Isn't the answer here kind of obvious?
No. There are a thousand other ways of solving problems for real people, so that doesn't explain why some choose software development as their preferred method.
Presumably, the reason for choosing software development as the method of solving problems for people is because software development itself brings joy. Different people find joy in different aspects even of that, though.
For my part, the stuff that AI is promising to automate away is much of the stuff that I enjoy about software development. If I don't get to do that, that would turn my career into miserable drudgery.
Perhaps that's the future, though. I hope not, but if it is, then I need to face up to the truth that there is no role for me in the industry anymore. That would pretty much be a life crisis, as I'd have to find and train for something else.
"There are a thousand other ways of solving problems for real people, so that doesn't explain why some choose software development as their preferred method."
Software development is almost unique in the scale that it operates at. I can write code once and have it solve problems for dozens, hundreds, thousands or even millions of people.
If you want your work to solve problems for large numbers of people I have trouble thinking of any other form of work that's this accessible but allows you to help this many others.
Fields like civil engineering are a lot harder to break into!
> That would pretty much be a life crisis, as I'd have to find and train for something else.
There's inertia in the industry. It's not like what you're describing could happen in the blink of an eye. You may well be at the end of your career when this prophecy is fulfilled, if it ever comes true. I sure will be at the end of mine and I'll probably work for at least another 20 years.
The inertia argument is real, and I would compare it to the mistaken believe of some at IBM in the 1970s that SQL would be used by managers to query relational databases directly, so no programming was needed anymore.
And what happened? Programmers make the queries and embed them into code that creates dashboards that managers look at. Or managers ask analysts who have to interpret the dashboards for them... It rather created a need for more programmers.
Compare embedded SQL with prompts - SQL queries compared to assembler or FORTRAN code is closer to English prose for sure.
Did it take some fun away? Perhaps, if manually traversing a network database is fun to anyone, instead of declaratively specifying what set of data to retrieve. But it sure gave new fun to people who wanted to see results faster (let's call them "designers" rather than "coders"), and it made programming more elegant due to the declarativity of SQL queries (although that is cancelled out again by the ugliness of mixing two languages in the code).
Maybe the question is: Does LLM-based coding enable a new kind of higher level "design flow" to replace "coding flow"? (Maybe it will make a slightly different group of people happy?)
This echoes my sentiment that LLMs are higher level programming languages. And, as every layer of abstraction, they add assumptions that may or may not fit the use case. The same way we optimize SQL queries by knowing how the database makes a query plan, we need to optimize LLM outputs, specially when the assumptions given are not ideal.
> No. There are a thousand other ways of solving problems for real people, so that doesn't explain why some choose software development as their preferred method.
I don't see why we should seek an explanation if there are thousands of ways to be useful to people. Is being a lawyer particularly better than being an accountant?
I'm probably just not as smart or creative as you, but say my problem is I have a ski cabin that I want to rent it to strangers for money. Nevermind a thousand, What are 100 ways without using software that I could do something about that, vs listing it on Airbnb?
I was speaking about solving people's problems generally. It's easy to find specific problems that are best addressed with software, just as it's easy to find specific problems that can't be addressed with software.
solving real problems is the core of it, but for a lot of people the joy and meaning come from how they solve them too. the shift to AI tools might feel like outsourcing the interesting part, even if the outcome is still useful. side projects will stick around for sure, but i think it's fair to ask what the day-to-day feels like when more of it becomes reviewing and prompting rather than building.
> Solving problems for real people. Isn't the answer here kind of obvious?
Look at the majority of the tech sector for the last ten years or so and tell me this answer again.
Like I guess this is kind of true, if "problems for real people" equals "compensating for inefficiencies in our system for people with money" and "solutions" equals "making a poor person do it for them and paying them as little as legally possible."
Those of us who write software professionally are literally in a field premised on automating other people's jobs away. There is no profession with less claim to the moral high ground of worker rights than ours.
I often think about the savage job-destroying nature of the open source community: hundreds of thousands of developers working tirelessly to unemploy as many of their peers as possible by giving away the code they've written for free.
(Interesting how people talk about AI destroying programming jobs all the time, but rarely mention the impact of billions of dollars of code being given away.)
> Those of us who write software professionally are literally in a field premised on automating other people's jobs away.
How true that is depends on what sort of software you write. Very little of what I've accomplished in my career can be fairly described as "automating other people's jobs away".
You're automating the 1's and 0's. There could be millions of people in an assembly like line of buttons, being paid minimum wage to press either the 1 or 0 button to eventually trigger the next operation.
Haven't we been automating jobs away since the industrial revolution? I know AI may be an exception to this trend, but at least with classical programming, demand goes up, GDP per capita goes up, and new industries are born.
I mean, there's three ways to get stuff done: do it yourself, get someone else to do it, or get a machine to do it.
#2 doesn't scale, since someone still has to do it. If we want every person to not be required to do it (washing, growing food, etc), #3 is the only way forward. Automation and specialization have made the unthinkable possible for an average person. We've a long way to go, but I don't see automation as a fundamentally bad thing, as long as there's a simultaneous effort to help (especially those who are poor) transition to a new form of working.
What is qualitatively different this time is that it affects intellectual abilities - there is nothing higher up in the work "food chain". Replacing physical work you could always argue you'd have time to focus on making decisions. Replacing decision making might mean telling people go sit on the beach and take your universal basic income (UBI) cheque, we don't need you anymore.
Sitting on the beach is not as nice as it sounds for some; if you don't agree, try doing it for 5 years. Most people require work to have some sense of purpose, it gives identity, and it structures their time.
Furthermore, if you replaced lorry drivers with self-driving cars, you'd destroy the most commonly held job in North America as well as South America, and don't tell me that they can be retrained to be AI engineers or social media influencers instead (some can only be on the road, some only want to be on the road).
I agree that we have been able to automate a lot of jobs, but it's not like intellectual jobs have completely replaced physical labor. Electricians, phlebotomists, linemen, firefighters, caregivers, etc, etc, are jobs that current AI approaches don't even scratch. I mean, Boston dynamics has barely been able to get a robot to walk.
So no, we don't need to retrain them to be AI engineers if we have an active shortage of electricians and plumbers. Now, perhaps there aren't enough jobs—I haven't looked at exact numbers—but we still have a long ways to go before I think everything is automated.
Everything being slop seems to be the much more likely issue in my eyes[1].
But... My local library has a job searching program? I have a friend who's learning masonry at a government sponsored training program? It seems the issue is not that resources don't exist, but that these people don't have the time to use them. So it's unfair to say they don't exist. Rather, it seems they're structured in an unhelpful way for those who are working double jobs, etc.
I see capitalism invoked as a "boogey man" a lot, which fair enough, you can make an emotional argument, but it's not specific enough to actually be helpful in coming up with a solution to help these people.
In fact, capitalism has been the exact thing that has lifted so many out of poverty. Things can be simultaneously bad and also have gotten better over time.
I would argue that the biggest issue is education, but that's another tangent...
> So it's unfair to say they don't exist. Rather, it seems they're structured in an unhelpful way for those who are working double jobs, etc.
I'll be sure to alert the next person I encounter working UberEats for slave wages that the resources exist that they cannot use. I'm sure this difference will impact their lives greatly.
Edit: My point isn't that UberEats drivers make slave wages (though they do): My point is that from the POV of said people and others who need the aforementioned resources, whether they don't exist or exist and are unusable is fucking irrelevant.
Slave wages? Like the wages for a factory worker in 1918[1]? $1300 after adjusting for inflation. And that was gruelling work from dawn to dusk, being locked into a building, and nickel and dimed by factory managers. (See the triangle shirtwaist factory). The average Uber wage is $20/hour[2]. Say they use 2 gallons of gas (60 mph at 30 mpg) at $5/gallon. That comes out to $10/hour, which is not great, but they're not being locked into factories and working from dawn to dusk and being fired when sick. Can you not see that this is progress? It's not great, we have a lot of progress to make, but it sure beats starving to death in a potato famine.
> Slave wages? Like the wages for a factory worker in 1918[1]? $1300 after adjusting for inflation.
I think they were using “slave wages” as a non-literal relative term to the era.
As you did.
A hundred years before your example, the “slave wages” were actually slave wages.
I think it’s fair to say a lot of gig workers, especially those with families, are having a very difficult time economically.
I expect gig jobs lower unemployment substantially, due to being convenient and easy to get, and potentially flexible with hours, but they lower average employment compensation.
> I think it’s fair to say a lot of gig workers, especially those with families, are having a very difficult time economically.
Great point. I wonder if this has to do with the current housing crisis and cost of utilities... Food has never been more affordable, in fact free with food banks and soup kitchens. But (IMHO) onerous zoning has really slowed down development and driven up prices.
Another cost is it's pretty much impossible to do anything without a smartphone and internet. I suppose libraries have free internet, but being able to get to said library is another issue.
And like you said, contract work trades flexibility for benefits, and that gets exploited by these companies.
I guess it just sucks sometimes because these issues are super hairy (shut down Uber, great, now you've just put everyone out of a job). "For every complex problem there is a solution which is clear, simple, and wrong."
Replying to your edit: it is relevant, because it means people are trying but it isn't working. When people aren't trying, you have to get people to start trying. When people are trying but it isn't working, you have to help change the approach. Doubling down on a failing policy (e.g. we just need to create more resources) is failing to learn from the past.
At some point, you've stopped participating in good faith with the thread and are instead trying to push it towards some other topic; in your case, apparently, a moral challenge against Uber. I think we get it; can you stop supplying superficial rebuttals to every point made with "but UberEats employs [contracts] wave slaves"?
> Those of us who write software professionally are literally in a field premised on automating other people's jobs away.
Depends what you write. What I work on isn't about eliminating jobs at all, if anything it creates them. And like, actual, good jobs that people would want, not, again, paying someone below the poverty line $5 to deliver an overpriced burrito across town.
> "Computers" used to be people! Literally, people.
Not always. Recruitment budgets have limits, so it's a fixed number of employees either providing services to a larger number of customers thanks to software, or serving fewer customers or do so less often without the software.
Thank you for the link, the reference you're making slipped past me. That said, I think my point still holds: software doesn't always have to displace workers, it can also help current employees scale their efforts when bringing on more people isn't possible.
That's fine; read it as me speaking to the whole thread, not challenging you directly. Technology drives economic productivity; increasing economic productivity generally implies worker displacement. That workers come out ahead in the long run (they have in the past; it's obviously not a guarantee) is besides my point. Software is automating software development away, the same way it automated a huge percentage of (say) law firm billable hours away. We'd better be ready to suck it up!
Got it, you're talking about workers getting ahead as a category -- no objections to that.
I doubt the displaced computers managed to find a better job on average. Probably even their kids were disadvantaged since the parents had fewer options to support their education.
So, who knows if this specific group of people and their descendants ever fully recovered let alone got ahead.
My argument is explicitly not premised on the claim that productivity improvements reliably work out to the benefit of existing workers. It's that practicing commercial software developers are agents of economic productivity, whether anticapitalist developers are happy about that or not, and have really no moral standing to complain about their jobs (or the joy in those jobs) being automated away. That's what increased economic productivity means: more getting done with less labor.
Can't relate at all. I've never had so much fun programming as I have now. All the boring and tedious parts are gone and I can finally focus on the code I love to write.
I don't know man, maybe prompt most of your work, eyeball it and verify it rigorously (which if you cannot do, you should absolutely never touch an LLM!), run a script to commit and push after 3 hours and then... work on whatever code makes you happy without using an LLM?
Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.
Still, prompting LLMs well requires eloquence and expressiveness that many programmers don't have. I have started deriving a lot of value from those LLMs I chose to interact with by specifying clear boundaries on what's the priority and what can wait for later and what should be completely ignored due to this or that objective (and a number of other parameters I am giving them). When you do that well, they are extremely useful.
I see this "prompting is an art" stuff a lot. I gave Claude a list of 10 <Route> objects and asked it to make an adjustment to all of them. It gave me 9 back. When I asked it to try again it gave me 10 but one didn't work. What's "prompt engineering" there, telling it to try again until it gets it right? I'd rather just do it right the first time.
I am also barely using LLMs at the moment. Even 10% of the time would be generous.
What I was saying is that I have tried different ways of interacting with LLMs and was happy to discover that the way I describe stuff to another senior dev actually works quite fine with an LLM. So I stuck to that.
Again, if an LLM is not up to your task, don't waste your time with it. I am not advocating for "forget everything you knew and just go ask Mr. AI". I am advocating for enabling and productivity-boosting. Some tasks I hate, for some I lack the deeper expertise, others are just verbose and require a ton of typing. If you can prompt the LLM well and vet the code yourself after (something many commenters here deliberately omit so they can happily tear down their straw man) then the LLM will be a net positive.
It's one more tool in the box. That's all there is to it really. No idea why people get so polarizing.
Prompt engineering is just trying that task on a variety of models and prompt variations until you can better understand the syntax needed to get the desired outcome, if the desired outcome can be gotten.
Honestly you’re trying to prove AI is ineffective by telling us it didn’t work with your ineffective protocol. That is not a strong argument.
What should I have done there? Tell it to make sure that it gives me all 10 objects I give it back? Tell it to not put brackets in the wrong place? This is a real question --- what would you have done?
How long ago was this? I'd be surprised to see Claude 3.7 Sonnet make a mistake of this nature.
Either way, when a model starts making dumb mistakes like that these days I start a fresh conversation (to blow away all of the bad tokens in the current one), either with that model or another one.
I often switch from Claude 3.7 Sonnet to o3 or o4-mini these days. I paste in the most recent "good" version of the thing we're working on and prompt from there.
A full two thirds of the comment you replied to there were me saying "when these things start to make dumb mistakes here are the steps I take to fix the problem".
You should have dropped the LLM, of course. They are not replacing us the programmers anytime soon. If they can be used as an enabler / booster, cool, if not, back to business as usual. You can only win here. You can't lose.
* experiment with multiple models, preferably free high quality models like Gemini 2.5. Make sure you're using the right model, usually NOT one of the "mini" varieties even if its marketed for coding.
* experiment with different ways of delivering necessary context. I use repomix to compile a codebase to a text file and upload that file. I've found more integrated tooling like cursor, aider, or copilot, are less effective then dumping a text file into the prompt
* use multi-step workflows like the one described [1] to allow the llm to ask you questions to better understand the task
* similarly use a back-and-forth one-question-at-a-time conversation to have the llm draft the prompt for you
* for this prompt I would focus less on specifying 10 results and more about uploading all necessary modules (like with repomix) and then verifying all 10 were completed. Sometimes the act of over specifying results can corrupt the answer.
I'm a pretty vocal AI-hater, partly because I use it day to day and am more familiar with its shortfalls - and I hate the naive zealotry so many pro-AI people bring to AI discussions. BUTTT we can also be a bit more scientific in our assessments before discarding LLMs - or else we become just like those naive pro-AI-everything zealots.
Each activity we engage in has different use, value, and subjective enjoyment to different people. Some people love knitting! Personally, I do know how to sew small tears, which is more than most people in the US these days.
Just because I utilize the services of others for some things does not mean that it should be expected I want to utilize the service of others for all things.
This is a preposterous generalization and exactly why I said the OP premise is laughable.
Further, you’ve shifted OP’s point from subjective enjoyment of an activity to getting “paid well” - this is an irrelevant tangent to whether “most” people in general would delegate work if they could.
> most of us would delegate our work code to somebody else or something else if we could.
I saw your objections to other comments on the basis of them seemingly not having a disdainful attitude towards coding they do for work, specifically.
I absolutely do have tasks, coding included, that I don't want to do, and find no joy in. If I can have my manager assign the task to someone else, great! But using an LLM isn't that, so I'm still on the hook for ensuring all the most boring parts of that task (bugfixing, reworks, integration, tests, etc) get done.
My experience with LLMs is that they simply shift the division of time away from coding, and towards all the other bits.
And it can't possibly just be about prompting. How many hundreds of lines of prompting would you need to get an LLM to understand your coding conventions, security baselines, documentation reqs, logging, tests, allowed libraries, OSS license restrictions (i.e. disallowed libraries), etc? Or are you just refactoring for all that afterwards?
Maybe you work somewhere that doesn't require that level of rigor, but that doesn't strike me as a good thing to be entrenching in the industry by increasing coders' reliance on LLMs.
A super necessary context here is that I barely use LLM at all still. Maybe I should have said so but I figured that too much nuance would ruin a top-level comment and mostly casually commented on a tradeoff of using or not using LLMs.
Where I use LLMs:
1. Super boring and annoying tasks. Yes, my prompts for those include various coding style instructions, requests for small clarifying comments where the goal of the code is not obvious, tests. So, no OSS license restrictions. Libraries I specify most of the times I used LLMs (and only once did I ask it to suggest a library). Logging and telemetry I add myself. So long story short, I use the LLM to show me a draft of a solution and then mercilessly refactor it to match my practices and guidelines. I don't do 50 exchanges out of laziness, no.
2. Tasks where my expertise is lacking. I recently used an LLM to help me with making a `.clone()`-heavy Rust code to become nearly zero-copy for performance reasons -- it is a code on a hot path. As much as I love Rust and I am fairly good at it (realistically I'm IMO at 7.5 / 10), all the lifetimes and zero-copy semantics I still don't know yet. A long session with an LLM after, I emerged both better educated and with a faster code. IMO a win-win.
Nice. I've got a whole lot of magical things that I need built for my day job. Want to connect so I can hand the work over to you? I'll still collect the paychecks, but you can have the joy. :)
> Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.
I don’t think this is the case, if anything the opposite is true. Most of us would like to do the work code but have realized, at some career point, that you’re paid more to abstract yourself away from that and get others to do it either in technical leadership or management.
> I don’t think this is the case, if anything the opposite is true
I'll be a radical and say that I think it depends and is very subjective.
Author above you seems to enjoy working on code by itself. You seem to have a different motivation. My motivation is solving problems I encounter, code just happen to be one way out of many possible ones. The author of the submission article seems to love the craft of programming in itself, maybe the problem itself doesn't even matter. Some people program just for the money, and so on.
Well, does not help that a lot of work tasks are meaningless drudgery that we collectively should have trivialized and 100% automated at least 20 years. That was kind of the core my point: a lot of work tasks are just plain BS.
I wouldn't, I got into software exactly because I enjoy solving problems and writing code. Verifying shitty, mindless, computer generated code is not something I would consider doing for all the money in the world.
1. I work on enjoyable problems after I let the LLM do some of the tasks I have to do for money. The LLM frees me bandwidth for the stuff I truly love. I adore solving problems with code and that's not going to change ever.
2. Some of the modern LLMs generate very impressive code. Variables caching values that are reused several times, utility functions, even closure helpers scoped to a single function. I agree that when the LLM code's quality falls bellow a certain threshold then it's better in every way to just write it yourself instead.
Needlessly polarizing. I love coding since 12 years old (so more than 30 years at this point) but most work tasks I'm given are fairly boring and uninteresting and don't move almost any science or knowledge forward.
Delegating part of that to an LLM so I can code the stuff I love is a big win for my motivation and is making me doing the work tasks with a bit more desire and pleasure.
Please don't forget that most of us out there can't code for money anything that their heart wants. If you can, I'd be happy for you (and envious) but please understand that's also a fairly privileged life you'd be having in that case.
You don't. That's why you don't use an LLM most of the time. I was talking about cases where either the tasks were too boring or required an expertise that I didn't have at the time.
Good question. I run it by the docs that intimidated me before. Because I did not ask the LLM for the code only; I asked it to fully explain what did it change and why.
Absolutely. That's why I don't give the LLM the reins for long, nor do I tell it to do the whole thing. I want to keep my mind sharp and my abilities honed.
> Still, prompting LLMs well requires eloquence and expressiveness that many programmers don't have
It requires magical incantations that may or may not work and where a missing comma in a prompt can break the output just as badly as the US waking up and draining compute resources.
Yeah, I think that's pretty common. It took me 15+ years of my own career before I got over my aversion to spending significant amounts of time reading through code that I didn't write myself.
We all do. But more often than not we have to learn to do surgical incisions in order to do our task for the day. It's what truly distinguishes a professional.
Totally. And yet rigorous proof is very difficult. Having done some mathematics involving nontrivial proofs, I respect even more how difficult rigor is.
Ah, I absolutely don't verify code in the mathematical sense of the word. More like utilize strong static typing (or hints / linters in weaker typed languages) and write a lot of tests.
Nothing is truly 100% safe or free of bugs. What I meant with my comment up-thread was that I have enough experience to have a fairly quick and critical eye of code, and that has saved my skin many times.
How did you get there from me agreeing 100% with someone who said that you should be ready to verify everything an LLM does for you and if you're not willing to do that you shouldn't use them at all?
Do you ever read my comments, or do you just imagine what I might have said and reply to that?
> work on whatever code makes you happy without using an LLM?
This isn't how it works, psychologically. The whole time I'm manual coding, I'm wondering if it'd be "easier" to start prompting. I keep thinking about a passage from The Road To Wigan Pier where Orwell addresses this effect as it related to the industrial revolution:
>Mechanize the world as fully as it might be mechanized, and whichever way you turn there will be some machine cutting you off from the chance of working—that is, of living.
>At a first glance this might not seem to matter. Why should you not get on with your ‘creative work’ and disregard the machines that would do it for you? But it is not so simple as it sounds. Here am I, working eight hours a day in an insurance office; in my spare time I want to do something ‘creative’, so I choose to do a bit of carpentering—to make myself a table, for instance. Notice that from the very start there is a touch of artificiality about the whole business, for the factories can turn me out a far better table than I can make for myself. But even when I get to work on my table, it is not possible for me to feel towards it as the cabinet-maker of a hundred years ago felt towards his table, still less as Robinson Crusoe felt towards his. For before I start, most of the work has already been done for me by machinery. The tools I use demand the minimum of skill. I can get, for instance, planes which will cut out any moulding; the cabinet-maker of a hundred years ago would have had to do the work with chisel and gouge, which demanded real skill of eye and hand. The boards I buy are ready planed and the legs are ready turned by the lathe. I can even go to the wood-shop and buy all the parts of the table ready-made and only needing to be fitted together; my work being reduced to driving in a few pegs and using a piece of sandpaper. And if this is so at present, in the mechanized future it will be enormously more so. With the tools and materials available then, there will be no possibility of mistake, hence no room for skill. Making a table will be easier and duller than peeling a potato. In such circumstances it is nonsense to talk of ‘creative work’. In any case the arts of the hand (which have got to be transmitted by apprenticeship) would long since have disappeared. Some of them have disappeared already, under the competition of the machine. Look round any country churchyard and see whether you can find a decently-cut tombstone later than 1820. The art, or rather the craft, of stonework has died out so completely that it would take centuries to revive it.
>But it may be said, why not retain the machine and retain ‘creative work’? Why not cultivate anachronisms as a spare-time hobby? Many people have played with this idea; it seems to solve with such beautiful ease the problems set by the machine. The citizen of Utopia, we are told, coming home from his daily two hours of turning a handle in the tomato-canning factory, will deliberately revert to a more primitive way of life and solace his creative instincts with a bit of fretwork, pottery-glazing, or handloom-weaving. And why is this picture an absurdity—as it is, of course? Because of a principle that is not always recognized, though always acted upon: that so long as the machine is there, one is under an obligation to use it. No one draws water from the well when he can turn on the tap. One sees a good illustration of this in the matter of travel. Everyone who has travelled by primitive methods in an undeveloped country knows that the difference between that kind of travel and modern travel in trains, cars, etc., is the difference between life and death. The nomad who walks or rides, with his baggage stowed on a camel or an ox-cart, may suffer every kind of discomfort, but at least he is living while he is travelling; whereas for the passenger in an express train or a luxury liner his journey is an interregnum, a kind of temporary death. And yet so long as the railways exist, one has got to travel by train—or by car or aeroplane. Here am I, forty miles from London. When I want to go up to London why do I not pack my luggage on to a mule and set out on foot, making a two days of it? Because, with the Green Line buses whizzing past me every ten minutes, such a journey would be intolerably irksome. In order that one may enjoy primitive methods of travel, it is necessary that no other method should be available. No human being ever wants to do anything in a more cumbrous way than is necessary. Hence the absurdity of that picture of Utopians saving their souls with fretwork. In a world where everything could be done by machinery, everything would be done by machinery. Deliberately to revert to primitive methods to use archaic took, to put silly little difficulties in your own way, would be a piece of dilettantism, of pretty-pretty arty and craftiness. It would be like solemnly sitting down to eat your dinner with stone implements. Revert to handwork in a machine age, and you are back in Ye Olde Tea Shoppe or the Tudor villa with the sham beams tacked to the wall.
>The tendency of mechanical progress, then, is to frustrate the human need for effort and creation. It makes unnecessary and even impossible the activities of the eye and the hand. The apostle of ‘progress’ will sometimes declare that this does not matter, but you can usually drive him into a comer by pointing out the horrible lengths to which the process can be carried.
This article resonates with me like no other has in years. I very recently retired after 40 years writing software because my role had evolved into a production-driven limbo. For the past decade I have scavenged and copied other peoples' code into bland cookie cutter utilities that fed, trained, ran, and summarized data mining ops. It has required not one whit of creative expression or 'flow', making my life's work as dis-engaging as that of... well... the most bland job you can imagine.
AI had nothing to do with my own loss of engagement, though certainly it won't cure what ailed me. In fact, AI promises to do to all of software development what the mechanized data mining process did to my sense of creative self-expression. It will squeeze all the fun out of it, reducing the joy of coding (and its design) to plug-and-chug, rinse, repeat.
IMHO the threat of AI to computer programming is not the loss of jobs. It's the loss of personal passionate engagement in the craft.
I’ve been struggling with a very similar feeling. I too am a manager now. Back in the day there was something very fulfilling about fully understanding and comprehending your solution. I find now with AI tools I don’t need to understand a lot. I find the job much less fulfilling.
The funny thing is I agree with other comments, it is just kind of like a really good stack overflow. It can’t automate the whole job, not even close, and yet I find the tasks that it cannot automate are so much more boring (the ones I end up doing).
I envy the people who say that AI tools free them up to focus on what they care about. I haven’t been able to achieve this building with ai, if anything it feels like my competence has decreased due to the tools. I’m fairly certain I know how to use the tools well, I just think that I don’t enjoy how the job has evolved.
It's 9am in the morning. I login to my workstation and muddle my way through the huge enterprise code base which doesn't fit into any model context window for the AI tool to be useful (and even if it did, we can't use any random model due to compliance and proprietary and whatnot).
I have thousands deadlines which are suddenly coming due and a bunch of code which is broken because some poor soul under the same pressure put something that "works" in. And it worked, until it didn't, and now it's my turn in the barrel.
Is this the joy?
I'm not complaining, I'm doing it for the good money.
When we outsource the parts of programming that used to demand our complete focus and creativity, do we also outsource the opportunity for satisfaction? Can we find the same fulfillment in prompt engineering that we once found in problem-solving through code?
Most of AI-generated programming content I use are comments/explanations for legacy code, closely followed by tailored "getting started" scripts and iterations on visualisation tasks (for shitty school assignments that want my pyplots to look nice). The rest requires an understanding, which AI can help you achieve faster (it's read many a book related to the topic, so it can recall information a lot like an experienced colleague may), but it can't confer capital K Knowledge or understanding upon you. Some of the tasks it performs are grueling, take a lot of time to do manually, and provide little mental stimulation. Some may be described as lobotomizing and (in my opinion) may mentally damage you in the "Jack Torrance typewriter" kinda way.
It makes me able to work on the fun parts of my job which possess the qualities the article applauds.
I always thought about the problem of AI taking jobs, that even if there are new jobs created to replace the older ones, it will come at a cost of decrease in satisfaction of overall populace.
The more people in general get disconnect from nature/physical world/reality. via layers of abstraction the more discontent they will become.
These layers can be:
1) Automatics in agriculture.
2) Industries.
3) Electronics
4) Software
5) and now AI
Each higher layer depends on lower ones for its functioning without the need to worry about specifics and provides a framework for higher abstraction to work on.
The more we move up in hierarchy the more disconnected we become from the physical world.
To support this I observed that villagers in general are more jolly and content than city dwellers. In metropolis specially I saw that people are more rude, anxious and always agitated, while villagers are welcoming and peaceful.
Another good example is that of an artist finding it boring to guide AI even though he loves making paintings himself/herself.
I've been singin' this song for years. We should return to Small Data. Hand picked, locally sourced, data. Data I can buy at a mom and pop shop. Data I can smell, data I can feel, data I can yearn for.
I am mostly pretty underwhelmed with LLMs' code, but this is a use-case that makes perfect sense to me, and seems like a net-positive: using them as a reference manual/ translator/ training aid.
I just wish I saw more people doing this, rather than asking them to 'draw 80% of the owl'.
There is craft in business, in product, and in engineering.
A lot of these discussions focus on craft in engineering and there's lots of merit there regarding AI tools and how they change that process, but I've found that folks who enjoy both the product side of things and the engineering side of things are thriving while those who were very engineering focused understandably feel apprehensive.
I will say, in my day job, which is often at startups, I have to focus more on the business / product side just given the phase of the company. So, I get joy from engineering craft in side projects or other things I work on in my own time to scratch the itch.
> After all, if we lose the joy in our craft, what exactly are we optimizing for?
For being one of the few lucky ones that gets to stay around taking care of the software factory robots, or designing them, while everyone else that used to work at the factory is now queueing somewhere else.
For me the most surprising part is the phase of wonder, from those that apparently never read anything in the history of industrial revolution, and think everyone will still have a place when we achieve Star Trek replicator level.
The author is already an experienced programmer. Let me toss in an anecdote about the next generation of programmers. Vibe coding: also called playing pinball with the AI, hoping something useful comes out.
I taught a lecture in my first-semester programming course yesterday. This is in a program for older students, mostly working while going back to school. Each time, a few students are selected to present their code for an exercise that I pick randomly from those they were assigned.
This guy had fancy slides showing his code, but he was basically just reading the code off the page. So I ask him: “hey, that method you call, what exactly does it do?”.
Um…
So I ask "Ok, the result from that method is assigned to a variable. What kind of variable is it?" Note that this is Java, the data type is explicitly declared, so the answer is sitting there on his slide.
Um…
So I tear into him. You got this from ChatGPT. That’s fine, if you need the help, but you need to understand what you get. Otherwise you’ll never get a job in IT.
His answer: “I already have a job in IT.”
Fsck. There is your vibe coder. You really do not want them working on anything that you care about.
This is one of the biggest dangers imo. While I agree with the OP about the deflation of joy in experienced programmers, the related but more consequential effect seems to be dissuading people from learning. A generational threat to collective competence and a disservice to students and teachers everywhere
Does your course not have exams or in-lab assignments? Should sort itself out. Honestly, I'm all for homework fading away as professors can't figure out how to prevent people from using AI. It used to be the case that certain kids could get away with not doing much because they were popular enough to get people to let them copy their assignments (at least for certain subjects). Eventually the system will realize they can't detect AI and everything has to be in-person.
Sure, this guy is likely to fail the course. The point is: he is already working in the field. I don't know his exact job, but if it involves programming, or even scripting, he is faking his way with AI, not understanding what he's doing. That is frightening.
> I don't know his exact job, but if it involves programming, or even scripting, he is faking his way with AI, not understanding what he's doing. That is frightening.
That could be considered malpractice. I know our profession currently doesn't have professional standards, but it's just a side effect of it being very new and not yet solidified; it won't be long until some duty of care becomes required, and we're already starting to see some movement in that direction, with things like the EU CRA.
So long as your experience and skill allows you to produce work of higher quality than average for your industry, then you will always have a job which is to review that average quality work, and surgically correct it when it is wrong.
This has always been true in every craft, and it remains true for programmers in a post LLL world.
Most training data is open source code written by novice to average programmers publishing their first attempts at things and thus LLMS are heavily biased to replicate the naive, slow, insecure code largely uninformed by experience.
Honestly to most programmers early in their career right now, I would suggest spending more time reviewing code, and bugfixes, than writing code. Review is the skillset the industry needs most now.
But you will need to be above average as a software reviewer to be employable. Go out into FOSSland and find a bunch of CVEs, or contribute perf/stability/compat fixes, proving you review and improve things better than existing automated tools.
Trust me, there are bugs -everywhere- if you know how to look for them and proving you can find them is the resume you need now.
The days of anyone that can rub two HTML tags together having a high paying job are over.
> LLMS are heavily biased to replicate the naive, slow, insecure code largely uninformed by experience
The one time i pasted LLM code without reviewing it it belonged on accidentally quadratic.
It was obvious at first read, but probably not for a beginner. The accidental complexity was hidden behind API calls that weren't wrong, just grossly inefficient.
Problem might be, if you lose the "joy" and the "flow" you'll stop caring about things like that. And software is bloated enough already.
In my case, I couldn't agree more, with the premise of the article, but my life today, is centered around writing software the very best that I can; regardless of value or price.
It's not very effective, if I were to be trying to make a profit.
It's really hard to argue for something, if the something doesn't result in value, as perceived by others.
For me, the value is the process. I often walk away from my work, once I have it up and shipping. I do like to take my work all the way through shipping, support, and maintenance, but find that my eye is always drawn towards new shores[0].
“A ship in harbor is safe, but that is not what ships are built for.”
–John A. Shedd
Honestly, most of the "real engineer" rhetoric is exhausting. Here's the thing: the people most obsessed with software craftsmanship, pattern orthodoxy, and layered complexity often create some of the most brittle, hostile, constantly mutating systems imaginable. You may be able to build abstractions, but if you're shipping stuff that users have to re-learn every quarter because someone needed to justify a promotion via another UI revamp or tech stack rewrite, you're not designing well. You're just changing loudly.
Also, stop gatekeeping AI tooling like it’s cheating. We’re not in a craft guild. The software landscape is full of shovelware and half-baked “best practices” that change more often than a JavaScript framework’s logo. I'm not here to honor the tradition of suffering through YAML hell or memorizing the 400 ways to configure a build pipeline. I’m here to make something work well, fast, and that includes leveraging AI like the power tool it is.
So yeah, you can keep polishing the turd pile of over-engineered “real” systems. The rest of us will be using AI to build, test, and ship faster than your weekly stand-up even finishes.
I will start by saying I don't have much experience with the latest AI coding tools.
From what I've seen using them would lead to more boredom. I like solving problems. I don't like doing code reviews. I wouldn't trust any AI generated code at this stage without reviewing it. If I could swap that around so I write code and AI gives me a reasonable code review and catches my mistakes I'd be much more interested.
I would argue that the vast majority of challenges I have had in my (very long) tech career were not technical challenges anyway, rather they were "people" problems (e.g., extracting the actual requirements and maintaining scope stability).
Would you be happier and feel more flow if you were typing in assembly? What about hand-punching cards? To me this reads more as nostalgia than a genuine concern. Tools are always increasing in abstraction, but there’s no reason you can’t achieve flow with new tools. Learning to prompt is the new learning to type.
I think.. based on recent events.. that some of the corporate inefficiencies are very poorly captured. Last year we had an insane project that was thrown at us before end of the year, because, basically, company had a tiff with the vendor and would rather have us spend our time in meetings trying to do what they are doing rather than pay vendor for that thing. From simple money spent perspective, one would think company's simple amoral compass would be a boon.
AI coding is similar. We just had a minor issue with ai generated code that was clearly not vetted as closely as it should have been making output it generated over a couple of months not as accurate as it should be. Obviously, it had to be corrected, then vetted and so on, because there is always time to correct things...
edit: What I am getting at is the old-fashioned, penny smart, but pound foolish.
Typing isn't the fun part of it for me. It's a necessary evil to realize a solution.
The fun part of being an engineer for me is figuring out how it all should work and fit together. Once that's done - I already basically have all of the code for the solution in my head - I've just got to get it out through my fingers and slog through all the little ways it isn't quite right, doesn't satisfy x or y best practice, needs to be reshaped to accommodate some legacy thing it has to integrate that is utterly uninteresting to me, etc.
In the old model, I'd enjoy the first few hours or days of working on something as I was designing it in my mind, figuring out how it was all going to work. Then would come the boring part. Toiling for days or weeks to actually get all the code just so and closing that long-tail gap from 90% done (and all interesting problems solved) to 100% done (and all frustrating minutia resolved).
AI has dramatically reduced the amount of time the unsatisfying latter part of a given effort lasts for me. As someone with high-functioning ADD, I'm able to stay in the "stimulation zone" of _thinking_ about the hard / enjoyable part of the problem and let AI do (50-70%, depending on domain / accuracy) of the "typing toil".
Really good prompts that specify _exactly_ what I want (in technical terms) are important and I still have to re-shape, clean up, correct things - but it's vastly different than it was before AI.
I'm seeing on the horizon an ability to materialize solutions as quickly as I can think / articulate - and that to me is very exciting.
I will say that I am ruthlessly pragmatic in my approach to development, focusing on the most direct solution to meet the need. For those that obsesses over beautiful, elegant code - personalizing their work as a reflection of their soul / identity or whatever, I can see how AI would suck all the joy from the process. Engineering vs. art, basically. AI art sucks and I expect that's as true for code as it is for anything else.
The things I'm usually tabbing through in cursor are not the things that make me feel a lot of enjoyment in your work. The things that are most enjoyable are usually the system level design aspects, the refactorings to make things work better. These you can brainstorm with AI, but cannot delegate to AI today.
The rest is glorified boilerplate that I find usually saps me of my energy, not gives me energy. I'm a fan of anything that can help me skip over that and get to the more enjoyable work.
I found myself recently making decent superficial progress only to introduce a bug and had a system crash (unusual bc it’s python) bc I didn’t really understand how the package worked (bc I bypassed the docs for the AI examples). It did end up working out ok - I then went into the weeds and realised the AI has given me two examples that worked in isolation but not together - inconsistent API calls essentially. I do like understanding what I’m doing as much or more than getting it done, bc it always comes back to you, sooner or later.
The post focuses on flow, but depending on what you mean by it, it isn't necessarily a good thing. Trying to solve something almost too difficult usually gets you out of flow. You still need concentration, though.
My main worry about AI is that people just keep using the garbage that exists instead of trying to produce something better, because AI takes away much of the pain of interacting with garbage. But most people are already perfectly fine using garbage, so probably not much will change here.
As a scientist, I actually greatly enjoy the AI assisted coding because it can help with the boring/tedious side of coding.
I.e. I occasionally have some new ideas/algorithms to try, and previously I did not have enough time to explore them out, because there was just too much boring code to be written. Now this part is essentially solved, and I can more easily focus on key algorithms/new ideas.
Funny that I found this article going to hacker news as a pause in my work : I had to chose between using Aider or my brain to code a small algorithmic task, sorting items of a list based on dependences between items written in a YAML file.
Using Aider would probably solve the task in 5 minutes. Coding it in 30 minutes. The former choice would result in more time for other tasks or reading HN or having a hot beverage or walking in the sun. The second would challenge my rusting algorithmic skills and give me a better understanding of what I'm doing for the medium term.
Hard choice. In any case, I have a good salary, even with the latter option I can decide to spend good times.
I've tried getting different AIs to say something meaningful about code, never got anything of value back so far. They can't even manage tab-completion well enough to be worth the validation effort for me.
Yeah I wonder how do the code look after such professional AI development. I tried ChatGPT 1o to ask it about simple C function - what errors are there. It answered only after I directly asked about the aspects I was expecting it to tell about. It means that if I didn't know that the LLM wouldn't tell me...
> Fast forward to today, and that joy of coding is decreasing rapidly. Well, I’m a manager these days, so there’s that… But even when I do get technical, I usually just open Cursor and prompt my way out of 90% of it. It’s way more productive, but more passive as well.
Dude's an engineering manager who codes maybe 5% of the time and his joy is decreasing. AI is not the problem, it's being an engineering manager.
I think a lot of this discussion is moot - it all devolves into the same arguments rehashed between people who like using AI and people who do not.
What we really need are more studies on the productivity and skill outcomes of using AI tools. Microsoft did one, with results that were very negative towards AI tools [1]. I would like to see more (and much larger cohort) studies along this line, whether they validate Microsoft's conclusions or oppose them.
Personally I do not find AI coding tools to be useful at all - but I have not put extensive time into developing a "skillset" to use them optimally. Mainly because I believe, similar to what the study by MS found, that they are detrimental to my critical reasoning skills. If this turns out to be wrong, I would not mind evaluating changing course on that decision - but we need more data.
i dont know where you are working, but where I work i cant prompt 90% of my job away using cursor. in fact, I find all of these tools to be more and more useless and our codebase is growing and becoming more complex
based on the current state of AI and the progress im witnessing on a month-by-month basis - my current prediction is there is zero chance AI agents are going to be coding and replacing me in the next few years. if i could short the startups claiming this, I would.
Don't get distracted by claims that AI agents "replace programmers". Those are pure hype.
I'm willing to bet that in a few years most of the developers you know will be using LLMs on a daily basis, and will be more productive because of it (having learned how to use it).
I have the same experience. It‘s basically a better StackOverflow, but just like with SO you have to be very careful about the replies, and also just like SO its utility diminishes as you get more proficient.
As an example, just today I was trying to debug some weird WebSocket behaviour. None of the AI tools could help, not Cursor, not plain old ChatGPT with lots of prompting and careful phrasing of the problem. In fact every LLM I tried (Claude 3.7, GPT o4-mini-high, GPT 4.5) introduced errors into my debugging code.
I’m not saying it will stay this way, just that it’s been my experience.
I still love these tools though. It’s just that I really don’t trust the output, but as inspiration they are phenomenal. Most of the time I just use vanilla ChatGPT though; never had that much luck with Cursor.
Yeah, they're currently horrible at debugging -- there seems to be blind spots they just can't get past so end up running in circles.
A couple days ago I was looking for something to do so gave Claude a paper ("A parsing machine for PEGs") to ask it some questions and instead of answering me it spit out an almost complete implementation. Intrigued, I threw a couple more papers at it ("A Simple Graph-Based Intermediate Representation" && "A Text Pattern-Matching Tool based on Parsing Expression Grammars") where it fleshed out the implementation and, well... color me impressed.
Now, the struggle begins as the thing has to be debugged. With the help of both Claude and Deepseek we got it compiling and passing 2 out of 3 tests which is where they both got stuck. Round and round we go until I, the human who's supposed to be doing no work, figured out that Claude hard coded some values (instead of coding a general solution for all input) which they both missed. In applying ever more and more complicated solutions (to a well solved problem in compiler design) Claude finally broke all debugging output and I don't understand the algorithms enough to go in and debug it myself.
Of course I didn't use any sort of source code management so I could revert to a previous version before it was broken beyond all fixing...
Honestly, I don't even consider this a failure. I learned a lot more on what they are capable of and now know that you have to give them problems in smaller sections where they don't have to figure out the complexities of how a few different algorithms interact with each other. With this new knowledge in hand I started on what I originally intended to do before I got distracted with Claude's code solution to a simple question.
--edit--
Oh, the irony...
After typing this out and making an espresso I figured out the problem Claude and Deepseek couldn't see. So much for the "superior" intelligence.
This has become especially true for me in the past four months. The new long context reasoning models are shockingly good at digging through larger volumes of gnarly code. o3, o4-mini and Claude 3.7 Sonnet "thinking" all have 200,000 token context limits, and Gemini 2.5 Pro and Flash can do 1,000,000. As "reasoning" models they are much better suited to following the chain of a program to figure out the source of an obscure bug.
Makes me wonder how many of the people who continue to argue that LLMs can't help with large existing codebases are missing that you need to selectively copy the right chunks of that code into the model to get good results.
But 1 million tokens is like 50k lines of code or something. That's only medium sized. How does that help with large complex codebases?
What tools are you guys using? Are there none that can interactively probe the project in a way that a human would, e.g. use code intelligence to go-to-definition, find all references and so on?
This to me is like every complaint I read when people generate code and the LLM spits out an error, or something stupid. It's a tool. You still have to understand software construction, and how to hold the tool.
Our Rust fly-proxy tree is about 80k (cloc) lines of code; our Go flyd tree (a Go monorepo) is 300k. Generally, I'll prompt an LLM to deal with them in stages; a first pass, with some hints, on a general question like "find the code that does XYZ"; I'll review and read the code itself, then feed that back to the LLM with questions like "summarize all the functionality of this package and how it relates to other packages" or "trace the flow of an HTTP request through all the layers of this proxy".
Generally, I'll take the results of those queries and have them saved in .txt files that I can reference in future prompts.
I think sometimes developers are demanding something close to AGI from their tooling, something that would do exactly what they would do (only, in the span of about 15 seconds). I don't believe in AGI, and so I don't expect it from my tools; I just want them to do a better job of fielding arbitrary questions (or generating arbitrary code) than grep or eglot could.
Yeah, 50,000 lines sounds about right for 1m tokens.
If your codebase is larger than that there are a few tricks.
The first is to be selective about what you feed into the LLM: if you know the work you are doing is in a particular area of the codebase, just paste that bit in. The LLM can make reasonable guesses about things the code references that it can't see.
An increasingly effective trick is to arm a tool-using LLM with a tool like ripgrep (effectively the "interactively probe the project in a way that a human would" idea you suggested). Claude Code and OpenAI Codex both use this trick. The smarter models are really good at deciding what to search for and evaluating the results.
I've built tools that can run against Python code and extract just the class, function and method signatures and their docstrings - omitting the actual code. If you code is well designed and has reasonable documentation that could be enough for the LLM to understand it.
This means I can feed in the exact code that the model needs in order to solve a problem. Here's a recent example:
llm -m openai/o3 \
-f https://raw.githubusercontent.com/simonw/llm-hacker-news/refs/heads/main/llm_hacker_news.py \
-f https://raw.githubusercontent.com/simonw/tools/refs/heads/main/github-issue-to-markdown.html \
-s 'Write a new fragments plugin in Python that registers issue:org/repo/123 which fetches that issue
number from the specified github repo and uses the same markdown logic as the HTML page to turn that into a fragment'
in the meantime im having lots of fun coding and using AI, reinventing every wheel i can. 0 stress cos i don't do it for money :).
I think a lot of people are having a tantrum because programing is not sexy anymore, its getting easier, the bar is lower now , the quality is awful and nobody cares. its like any other boring soul crushing job.
also if you want to see the real cost (at least part of it) of AI coding or the whole fucked up IT industry, go to any mining town in the global south.
>"...the one thing that currently worries me most about using AI for software development: lack of joy."
I struggled with this at first too. But it just becomes another kind of joy. Think of it like jogging versus riding a motorcycle. Jogging is fun, people enjoy it, and they always will. But flying down a canyon road at 90MPH and racing through twists and turns is... way more fun. Once you've learned how to do it. But there's a gap there in which it stops being fun until you do.
That’s an interesting analogy but I do disagree with it.
I would say that programming without an AI is like riding a motorcycle. You’re in complete control and it’s down to your skill to get you we’re your going.
While using AI is like taking a train. You got to plan the route but you’re just along for the ride.
Which I think lines up to the article. If you want to get somewhere easily and fast, take a train. But that does take away the joy of the journey.
One of the things people often overlook don't talk about in this arguments is the manager's point of view and how it's contributing to the shakeups in this industry.
As a developer I'm bullish on coding agents and GenAI tools, because they can save you time and can augment your abilities. I've experienced it, and I've seen it enough already. I love them, and want to see them continue to be used.
I'm bearish on the idea that "vibe coding" can produce much of value, and people without any engineering background becoming wildly productive at building great software. I know I'm not alone. If you're a good problem solver who doesn't know how to code, this is your gateway. And you better learn what's happening with the code while you can to avoid creating a huge mess later on.
Developers argue about the quality of "vibe coded" stuff. There are good arguments on both sides. At some point I think we all agree that AI will be able generate high quality software faster than a human, someday. But today is not that day. Many will try to convince you that it is.
Within a few years we'll see massive problems from AI generated code, and it's for one simple reason:
Managers and other Bureaucrats do not care about the quality of the software.
Read it again if you have to. It's an uncomfortable idea, but it's true. They don't care about your flow. They don't care about how much you love to build quality things. They don't care if software is good or bad they care about closing tickets and creating features. Most of them don't care, and have never cared about the "craft".
If you're a master mason crafting amazing brickwork, you're exactly the same as some amateur grabbing some bricks from home depot and slapping a wall together. A wall is a wall. That's how the majority of managers view software development today. By the time that shoddy wall crumbles they'll be at another company anyway so it's someone else's problem.
When I talk about the software industry collapsing now, and in a few years we're mired with garbage software everywhere, this is why. These people in "leadership" are salivating at the idea of finally getting something for nothing. Paying a few interns to "vibe code" piles of software while they high five each other and laugh.
It will crash. The bubble will pop.
Developers: Keep your skills sharp and weather out the storm. In a few years you'll be in high demand once again. When those walls crumble, they will need people who what they're doing to repair it. Ask for fair compensation to do so.
Even if I'm wrong about all of this I'm keeping my skills sharp. You should too.
This isn't meant to be anti-management, but it's based on what I've seen. Thanks for coming to my TED talk.
* And to the original point, In my experience the tools interrupt the "flow" but don't necessarily take the joy out of it. I cannot do suggestion/autocomplete because it breaks my flow. I love having a chat window with AI nearby when I get stuck or want to generate some boilerplate.
I think you are right with the building analogy. Most stuff built in the last 25 years is crap quality! But importantly looks nice. Needs to look nice at first.
> If you're a master mason crafting amazing brickwork, you're exactly the same as some amateur grabbing some bricks from home depot and slapping a wall together.
IDK, there's still a place in society for master masons to work on 100+ year old buildings built by other master masons.
Same with the robots. They can implement solutions but I'm not sure I've heard of any inventing an algorithmic solution to a problem.
Earlier this year, a hackernews started quizzing me about the size and scope of the projects I worked on professionally, with the implication that I couldn't really be working on anything large or complex -- that I couldn't really be doing serious development, without using a full-fat IDE like IntelliJ. I wasn't going to dox myself or my professional work just so he could reach a conclusion he's already arrived at. The point is, to this person, beyond a certain complexity threshold -- simple command-line tools, say -- an IDE was a must, otherwise you were just leaving productivity on the table.
People are going to be making the same judgements about AI-assisted coding in the near future. Sure, you could code everything yourself for your own personal enrichment, or simply because it's fun. But that will be a pursuit for your own time. In the realm of business, it's a different story: you are either proompting, or you're effectively stealing money from your employer because you're making suboptimal use of the tools available. AI gets you to something working in production so much faster that you'd be remiss not to use it. After all, as Milt and Tim Bryce have shown, the hard work in business software is in requirements analysis and design; programming is just the last translation step.
So if I'm understanding this, there are two central arguments being made here.
1. AI Coding leads to a lack of flow.
2. A lack of flow leads to a lack of joy.
Personally, I can't find myself agreeing with the first argument. Flow happens for me when I use AI. It wouldn't surprise me if this differed developer to developer. Or maybe it is the size of requests I'm making, as mine tend to be on the smaller size where I already have an idea of what I want to write but think the AI can spit it out faster. I also don't really view myself as prompt engineering; instead it feels more like a natural back and forth with the AI to refine the output I'm looking for. There are times it gets stubborn and resistant to change but that is generally a sign that I might want to reconsider using AI for that particular task.
One trend I've been finding interesting over the past year is that a lot of engineers I know who moved into engineering management are writing code again - because LLMs mean they can get something productive done in a couple of hours where previously it would have taken them a full day.
Managers usually can't carve out a full day - but a couple of hours is manageable.
See also this quote from Gergely Orosz:
Despite being rusty with coding (I don't code every day
these days): since starting to use Windsurf / Cursor with
the recent increasingly capable models: I am SO back to
being as fast in coding as when I was coding every day
"in the zone" [...]
When you are driving with a firm grip on the steering
wheel - because you know exactly where you are going, and
when to steer hard or gently - it is just SUCH a big
boost.
I have a bunch of side projects and APIs that I operate -
but usually don't like to touch it because it's (my)
legacy code.
Not any more.
I'm making large changes, quickly. These tools really
feel like a massive multiplier for experienced devs -
those of us who have it in our head exactly what we want
to do and now the LLM tooling can move nearly as fast as
my thoughts!
> a lot of engineers I know who moved into engineering management are writing code again
They should be managing instead. Not to say that they can't code their own tools, but the statement sounds like a construction supervisor nailing studs or welding steel bars. Can work for a small team, but that's not your primary job.
I've been an engineering manager and it's a lot easier to make useful decisions that your team find credible if you can keep your toes in the water just a little bit.
My golden rule is to stay out of the critical path of shipping a user-facing feature: if a product misses a deadline because the engineering manager slipped on their coding commitments, that's bad.
The trick is to use your minimal coding time for things that are outside of that critical path: internal tools, prototypes, helping review code to get people unstuck, that kind of thing.
Yeah I think flow is more about holding a lot of knowledge about the code and its control flow in your head at a time. I think there's an XKCD or something that illustrates that.
You still need to do that if you're using AI, otherwise how do you know if it's actually done a good job? Or are people really just vibe coding without even reading the code at all? That seems... unlikely to work.
Some people love programming, for the sake of programming itself. They love the CS theory, they love the tooling, they love most everything about it.
Other people see all that as an means to an end - and find no joy from the technical aspect of creating something. They're more interested in the end result / product, rather than the process itself.
I think that if you're in group A, it can be difficult to understand group B. In vice versa.
I'm a musician, so I love everything about creating music. From the theory, to the mastery of the instrument, the tens of thousands of hours I've poured into it...finally being able to play something I never thought I'd be able to, just by sheer willpower and practice. Coming up with melodies that feel something to me, or I can relate to something.
On the other hand, I know people that want to jump straight to the end result. They have some melody or idea in their head, and they just want to generate some song that revolves around that idea.
I don't really look down on those people, even though the snobs might argue that they're not "real musicians". I don't understand them, but that's not really something I have to understand either.
So I think there are a lot of devs these days, that have been honing their skills and love for the craft for years, that don't understand why people just want things to be generated, with no effort.
> Some people love programming
> Other people see all that as an means to an end
I think it's worth pointing out that most people are both these things at different times.
There's things I care about and want a deep understanding of but there's plenty of tasks I want to just "go away". If I had an junior coder - I'd be delegating these. Instead I use AI when I can.
There's also tasks where I want a jump start. I prefer fixing/improving code over writing from scratch so often a bad AI attempt is still valuable to me.
You likely don’t have a say in the matter, but you should have a junior developer. That’s where senior developers come from.
Why should I have a junior developer who is going to do negative work instead of poaching a mid developer who is probably underpaid since salary compression and inversion are real?
As a manager, say I do hire a junior developer, invest time into them and they level up. I go to the HR department and tell them that they deserve a 30% raise to bring them inline with the other mid level developers.
The HR department is going to say that’s out of policy and then the developer jumps ship.
> Why should I have a junior developer who is going to do negative work instead of poaching a mid developer who is probably underpaid since salary compression and inversion are real?
The tragedy of the commons in a nutshell. Maybe everyone should invest in junior developers so that everyone has mid-level developers to poach later?
Not only that but teaching is a fantastic way to learn. Its easy to miss the learning though because you get the most when you care. If you care you take time to think and you're forced to contend with things you've taken for granted. You're forced to revisit the things you've tabled because you didn't have the time or expertise to deal with it at the time.
There's no doubt about it, there's selfish reasons to teach, mentor, and have a junior under you. We're social creatures. It should be no surprise that what's good for the group is usually good for yourself too. It's kinda as if we were evolutionarily designed to be this way or something ¯\_(ツ)_/¯
Everyone says they don't have time, but you get a lot of time by doing things right instead of doing things twice. And honestly, we're doing it a lot more than twice.
I just don't understand why we're so ready and willing to toss away a skill that allowed us to become the most successful creature on the planet: forethought. It's not just in coding but we're doing it everywhere. Maybe we're just overloaded but you need forethought to fix that, not progressively going fast for the sake of going fast
I’m not a manager by the way, my previous comment was more of a devil’s advocate/hypothetical question.
I leveled up because I practice mentoring others. But it still doesn’t make sense for the organization to hire juniors. Yes I realize someone has to. It’s especially true for managers who have an open req to fill because they need work done now.
On the other hand, my one, only and hopefully last role in BigTech where I worked previously, they could afford to have an intern program and when they came back after college have a 6 month early career/career transition program to get them up to speed. They could afford the dead weight loss.
Many have said that it's useful to delegate writing boilerplate code to an AI so that you can focus on the interesting bits that you do want to write yourself, for the sake of enjoying writing code.
I recognize that and I kind of agree, but I think I don't entirely. Writing the "boring" boilerplate gives me time to think about the hard stuff while still tinkering with something. I think the effect is similar to sleeping on it or taking a walk, but without interrupting the mental cruncing that's going in my brain during a good flow. I piece together something mundane that is as uninteresting as it is mandatory, but at the same time my subconscious is thinking about the real stuff. It's easier that way because the boilerplate does actually, besides being boring, still connect to the real stuff, ultimately.
So, you're kind of working on the same problem even if you're just letting your fingers keep typing something easy. That generates nice waves of intensity for my work. My experience regarding AI tends to break this sea of subconsciousness: you need to focus on getting the AI to do the right thing which, unlike typing it yourself, is ancillary to the original problem. Maybe it's just a matter of practise and at some point I can keep my mind on the domain in question eventhough I'm working an AI instead of typing boilerplate myself.
The first time you write the code to accomplish something you get your highs.
IMHO there's no joy in doing the same thing multiple times. DRY doesn't help with that, you end up doing a lot of menial work to adapt or integrate previous code.
Most of the for-profit coding is very boring.
I've always distilled this down to people who like the "craft" and those who like the "result".
Of course, everything is on a scale so it's not either/or.
But, like you, how I get there matters to me, not just the destination.
Outside the context of music, a project could be super successful but if the journey was littered with unnecessary stress due to preventable reasons, it will still leave a bad taste in my mouth.
> I've always distilled this down to people who like the "craft" and those who like the "result".
I find it very unlikely anyone who only likes the results will ever pick up the craft in the first place
It takes a very specific sort of person to push through learning a craft they dislike (or don't care about) just because they want a result badly enough
I hate IT, will pick literally anything else to work at, but the money is an issue.
I have a love/hate relationship with tech, but it would take many paragraphs to explain it :)
I love IT because it is a way to earn decent money, a way to escape from poverty. Hate everything else, though.
What's "the result"? Because I don't like how this divide is being stated (it's pretty common).
Seems to me that "the result" is "the money" and not "the product".
Because I'd argue those that care about the product, the thing being built, the tool, take a lot of pride in their work. They don't cut corners. They'll slog through the tough stuff to get things done.
These things align much more with the "loves coding" group than "the result". Frankly, everyone cares about "the result" and I think we should be clear about what is actually meant
The issue with programming is that it isn't like music or really any other skill where you get feedback right away and operate in a well understood environment. And a lot of patterns are not well designed as they are often based on what a single developer things the behavior ought to be instead of something more deterministic like the laws of physics that influence the cord patterns we use in music.
Nope, your code might look excellent. Why the hell isn't it running though? Three hours later you find you added a b when you closed your editor somewhere in the code in a way your linter didn't pick up and the traceback isn't clear about, maybe you broke some all important regex, it doesn't matter. One second, it's fixed, and you just want to throw the laptop out the window and never work on this project again. So god damned stupid.
And other things are frusterating too. Open a space deliminated python file, god forbid you add a tab without thinking. And what is crazy about that is if the linter is smart enough to say "hey you put a tab here instead of spaces for indent" then why does it even throw the error and not just accept both spaces and tabs? Just another frustration.
Really I would love to just go at it, write code, type, fly, be in the flow state, like one does building something with the hands or making music or doing anything in the physical world. But no. Constant whack a mole. Constantly hitting the brakes. Constant blockers. How long will this take to implement? I have no fucking idea man, could be 5 seconds or 5 weeks and you don't often know until you spend the 5 seconds and see that didn't do it yet.
I’m in group A and B. I do programming for the sake for it at home. I read tons of technical books for the love of it. At work, though, I do whatever the company wants or whatever they allow me… I just do it for the money.
Some writers like to write. Some like to have written.
Some people like to play a musical instrument, others to compose music. Those who play range from classicists, who have limited ability to improvise or interpret, to popular or jazz, or composition, where creativity and subtle expression is the life blood of the work.
Programming is similar to music. (A great many software innovators in the 70s and 80s had musical roots). But AI prunes away all the creativity and stylistic expression from the composition and the performance when designing and building software, reducing the enterprise to mere specification -- as if the libretto of the opera were merely an outline, and even that was based on Cliff Notes.
The case for using AI to code is driven strictly by economics and speed. Stylistically and creatively, AI is a no-brainer.
I think I am somewhere between the two groups you mention
I don't really get any joy from the act of coding, but I also take a lot of pride in doing a good job.
Cutting corners and producing sloppy work is anathema to me, even when I don't really enjoy the work itself
Any work worth doing is worth doing a good job on, even if I don't enjoy the work itself
I think a closer analogy is:
- A singer might learn to play guitar to sing along to it. Guitar is a means to an end; it is simply a tool to them.
- A guitarist learns to play guitar due to love of the instrument.
Sounds a bit like the different subjects of "applied math" vs "math"
Some like proving and deriving, for others it's a tool to solve other problems
> On the other hand, I know people that want to jump straight to the end result. They have some melody or idea in their head, and they just want to generate some song that revolves around that idea. I don't really look down on those people, even though the snobs might argue that they're not "real musicians". I don't understand them, but that's not really something I have to understand either.
So if someone generates their music with AI to get their idea to music you don’t look down on it?
Personally I do, if you don’t have the means to get to the end you shouldn’t get to the end and that goes double in a professional setting. If you are just generating for your own enjoyment go off I guess but if you are publishing or working for someone that’ll publish (aka a professional setting) you should be the means to the end, not AI.
Where do you draw that line though?
If you're talking about a person using an LLM, or some other ML system, to help generate their music then the LLM is really just a tool for that person.
I can't run 80 mph but I can drive a car that fast, its my tool to get the job done. Should I not be allowed to do that professionally if I'm not actually the one achieving that speed or carrying capacity?
Personally my concerns with LLMs are more related to the unintended consequences and all the unknowns in play given that we don't really know how they work and aren't spending much effort solving interoperability. If they only ever end up being a tool, that seems a lot more in line with previous technological advancements.
> I can't run 80 mph but I can drive a car that fast, its my tool to get the job done.
Right, but if you use a chess engine to win a chess championship or if you use a motor to win a cycling championship, you would be disqualified because getting the job done is not the point of the exercise.
Art is (or should be) about establishing dialogues and connections between humans. To me, auto-generated art it's like choosing between seeing a phone picture of someone's baby and a stock photo picture of a random one - the second one might "get the job done" much better, but if there's no personal connection then what's the point?
> I can't run 80 mph but I can drive a car that fast
If you drive a car 80mph you don't get to claim you are a good runner
Similarly if you use an LLM to generate 10k lines of code, you don't get to claim you are a good programmer
Regardless of the outcome being the "same"
You do get to claim that you’re a good getting-places-er, though, which is the only point of commercial programming.
Project Managers will tell you that "getting to a place" is the goal
Then you get to the place and they say "now load all of the things in the garage into the truck"
But oops. You didn't bring a truck, because all they told you was "please be at this address at this time", with no mention of needing a truck
My point is that the purpose of commercial programming is not usually just to get to the goal
Often the purpose of commercial programming is to create a foundation that can be extended to meet other goals later, that you may not even be remotely aware of right now
If your foundation is a vibe coded mess that no one understands, you are going to wind up screwed
And yes, part of being a good programmer includes being aware of this
I work with quite a few F100 companies. The actual amount of software most of them create is staggering. Tens of thousands of different applications. Most of it is low throughput and used by a small number of employees for a specific purpose with otherwise low impact to the business. This kind of stuff has been vibe coded long before there was AI around to do it for you.
At the same time human ran 'feature' applications like you're talking about often suffer from "let the programmer figure it out" problems where different teams start doing their own things.
Why?
What has always held true so far: <new tool x> abstracts challenging parts of a task away. The only people you will outcompete are those, who now add little over <new tool x>.
But: If in the future people are just using <new tool x> to create a product that a lot of people can easily produce with <new tool x>, then, before long, that's not enough to stand out anymore. The floor has risen and the only way to stand out will always be to use <new tool x> in a way that other people don't.
People who can't spin pottery shouldn't be allowed to have bowls, especially mass produced by machine ones.
I understand your point, but I think it is ultimately rooted in a romantic view of the world, rather than the practical truth we live in. We all live a life completely inundated with things we have no expertise in, available to us at almost trivial cost. In fact it is so prevalent that just about everyone takes it for granted.
Sure, but they also shouldn't claim they're potters because they went to Pottery Barn.
Sounds like Communist Albania where everybody had to be able to repair the car and take it apart and put it back together to own one
> So if someone generates their music with AI to get their idea to music you don’t look down on it?
It depends entirely on how they're using it. AI is a tool, and it can be used to help produce some wonderful things.
- I don't look down on a photographer because they use a tool to take a beautiful picture (that would have taken a painter longer to paint)
- I don't look down on someone using digital art tools to blur/blend/manipulate their work in interesting ways
- I don't look down on musicians that feed their output through a board to change the way it sounds
AI (and lots of other tools) can be used to replace the creative process, which is not great. But it can also be used to enhance the creative process, which _is_ great.
If they used an algorithm to come up with a cool melody and then did something with it, why look down on it?
Look at popular music for the last 400 years. How is that any different than simply copying the previous generations stuff and putting your own spin on it?
If you heard a CD in 1986 then in 2015 you wrote a song subconsciously inspired by that tune, should I look down on you?
I mean, I'm not a huge fan of electronic music because the vast majority of it sounds the same to me, but I don't argue that they are not "real musicians".
I do think that some genres of music will age better than others, but that's a totally different topic.
I think you don't look down at the product of AI, only the process that created it. Clearly the craft that created the object has become less creative, less innovative. Now it's just a variation on a theme. Does such work really deserve the same level of recognition as befitted Beethoven for his Ninth or Robert Bolt for his "A Man for all Seasons"?
Your company doesn’t care about how you got to the end, they just care about did you get there and meet all of the functional and non functional requirements.
My entire management chain - manager, director and CTO - are all technical and my CTO was a senior dev at BigTech less then two years ago. But when I have a conversation with any of them, they mostly care about whether the project I’m working on/leading is done on time/within budget/meets requirements.
As long as those three goals are met, money appears in my account.
One of the most renown producers in hip hop - Dr. Dre - made a career in reusing old melodies. Are (were) his protégés - Easy-E, Tupac, Snoop, Eminem, 50 cent, Kendrick Lamar, etc - not real musicians?
Have you heard the saying there is too much performance in the practice room? It's the same with programming. Performance is the goal, and practice is how you get there. No one seems to be in danger of practicing too much though.
[dead]
i mean how far are you willing to take that argument? every decade has just been a new abstraction, imagine people flipping switches or in raw assembly talking about how they don't "understand" you now with your no effort. or even those who don't "understand" why you use your autocomplete and fancy IDE, preferring a simple text editor.
i say this as someone who cut my teeth on this stuff growing up and seeing the evolution, it's both. and at some point it's honestly elitism and gatekeeping. i sort of cringe when it's called a "craft" because it's not like woodworking or something. the process is both full of joy but so is the end result, and the nature of our industry is that the process is ALWAYS changing.
you accumulate a depth of knowledge and watch as it washes away in a few years. that kind of change, and the big kind of change that AI brings scares people so they start clinging to it like it's some kind of centuries old trade lol.
It is not just gatekeeping. It is a stubborn refusal to see that one could be programming something much more sophisticated if they could use these iteration loops efficiently.
Many of these folks would do well to walk over to the intersection of Market, Bush, and Battery Streets in San Francisco and gaze up at the Mechanics Monument.
> It is a stubborn refusal to see that one could be programming something much more sophisticated if they could use these iteration loops efficiently
Programming something more sophisticated with AI? AI is pretty much useless if you're doing anything somewhat novel. What it excels at is vomiting code that has already been written a million times so you can build yet another Electron cross-platform app.
what sort existing of projects do you think couldn't have been created with an AI-heavy workflow?
I have actually had some really great flow evenings lately, the likes of which I have not enjoyed in many years, precisely because of AI-assisted coding. The trick is to break the task down in to components that are of moderate complexity so that the AI can handle them (Gemini 2.5 Pro one-shots), and keep your mind on the high-level design which today's AI cannot coordinate.
What helps me is to think of it like I'm a kid again, learning to code full of ideas but without any pre-conceived notions. Rather than the Microsoft QuickBasic manual in my hands, I've got Gemini & Claude Code. I would be gleefully coding up a storm of games, websites, dubious webcrawlers, robots, and lord knows what else. Plenty of flow to be had.
I always wonder what kind of projects are we talking about. I am currently writing a compiler and simulation engine for differential-algebraic equations. I tried few models, hoping they would help me, but they could not provide any help with small details nor with bigger building blocks.
I guess if you code stuff that had been coded a lot in public repos, it is fine, otherwise AI does not help in any way. Actually, I think I wasted more time trying to make it produce the output I wish for than it took me to do this myself.
That's been my experience. If it's been solved a million times, it's helpful. If you're out on the frontier where there's no public code, it's worse than useless.
If you're somewhere in between (where I am now) it's situationally useful for small sub-components but you need to filter it heavily or you'll end up wasting a day or two going down a wrong rabbit-hole either because you don't know the domain well enough to tell when it's bullshitting or going down a wrong path, or don't know the domain well enough to use the right keyword to get it to cough up something useful. I've found domain knowledge essential for deciding when it's doing something obviously wrong instead of saying "I don't know" or "This is the wrong approach to the problem".
For the correct self-contained class or block of code, it is much faster to specify the requirements and go through a round or two of refinement than it is to write it myself. For the wrong block of code it's a complete waste of time. I've experienced both in the last few days.
I don't even think you have to be on the frontier for LLMs to lose most of their effectiveness. Large legacy codebases with deeply ingrained tribal knowledge and loads of idiosyncrasies and inconsistencies will do the trick. Sad how most software projects end in this state.
Obviously LLMs in this situation will still be insanely helpful, but in the same way that Google searches or stack overflow is insanely helpful.
For me it's been toy games built on web languages, which happens to be something I toyed with via my actual raw skills for the past 15 years. LLMs have opened many new doors and options for what I can build because I now technically "know everything" in the world via LLMs. Stuff that I would get stuck wasting hours on is now solved in minutes. But then it ALWAYS reaches a point where the complexity the LLM has generated is too much and the model can no longer iterate on what it's built.
people seem to forget this type of argument from the article was used for stack overflow for years, calling it the destruction of programming. "How can you get into flow when you are just copying and pasting?". Those same people are now all sour grapes for AI assisted development. There will always be detractors saying that the documentation you are using is wrong, the tools that you are using are wrong, and the methodology you are using is wrong.
AI assisted development is no different from managing an engineering team. "How can you trust outsourced developers to do anything right? You won't understand the code when it breaks"... "How can you use an IDE, vim is the only correct tool" etc etc etc.
Nothing has changed besides the process. When people started jumping on object orientation they called procedures the devil itself, just as procedures were once called structured programming and came to banish away the considered harmful goto. Everything is considered harmful when theres something new around the corner that promises to either make development more productive or developers more interchangeable. These are institutional requirements and will never go away.
Embrace AIOP (AI oriented programming) to banish copy and paste google driven development which is now considered harmful.
The issue with "AIOP" is that you don't have a litany of others (as is the case with SO) providing counter examples, opinions, updated best practices, etc. People take the AI output as gospel and suffer for it without being exposed so the ambiguity that surrounds implementing things.
Will an engineering team ever be able to craft a thing of wonder, that surprises and delights? I think great software can do that. But I've seen it arise only rarely, and almost always as originating from one enlightened mind, someone who imagined a better way than the well-trod paths taken by so many who went before. I can imagine AI as a means to go only 'where man gas gone before'.
I'm a classic engineer, so lots of experience with systems and breaking down problems, but probably <150 hours programming experience over 15 years. I know how computers work and "think", but I an awful at communicating with them. Anytime I have needed to program something I gotta crash course the language for a few days.
Having LLMs like 2.5 now are total game changers. I can basically flow chart a program and have Gemini manifest it. I can break up the program into modules and keep spinning up new instances when context gets too full.
The program I am currently working on is up to ~5500 LOC, probably across 10ish 2.5 instances. It's basically an inventory and BOM management program that takes in bloated excel BOMs and inventory, and puts it in an SQLite database, and has a nice GUI. Absolutely insane how much faster SQLite is for databases than excel, lol.
I've heard a _lot_ of stories like this. What I haven't heard is stories about the deployment of said applications and the ability of the human-side author to maintain the application. I guess that's because we're in early days for LLM coding, or the people who did this aren't talking (about their presumed failures... people tend to talk about successes publicly, not the failures).
At my day job I have 3 programs written by LLM used in production. One written by GPT-4 (in spring 2023) and recently upgraded by gemini 2.5, and the other two by Claude 3.7
One is a automatic electronics test system that runs tests and collects measurements (50k+ readings across 8-12 channels)(GPT-4, now with a GUI and faster DB thanks to 2.5). One is a QC tool to help quickly make QC reports in our companies standard form (3.7). And the last is a GUI CAD tool for rendering and quickly working through ancient manufacturing automation scripts from the 80's/90's to bring them up to compatibility with modern automation tooling (3.7).
I personally think that there is a large gap between what programs are, and how each end user ultimately uses them. The programs are made with a vast scope, but often used narrowly by individuals. The proprietary CAD program that we were going to use originally for the old files was something like $12k/yr for a license. And it is a very powerful software package. But we just needed to do one relatively simple thing. So rather than buy the entire buffet, buy the entire restaurant, Claude was able to just make simple burger.
Would I put my name on these and sell to other companies? No. Am I confident other LLM junkies could generate similar strongly positive outcomes with bespoke narrow scope programs? Absolutely.
Only 150 hours of programming in 15 years? Are you in more of an Architect / Tech Lead role than an IC (individual contributor) role?
I'm an electrical engineer and work mostly with power electronics.
This is the way. I feel like a kid too again. It's way more fun actually. As a kid I got too frustrated for not being able to install my WAMP stack.
Added joy for me as well mostly by giving me the relevant API calls I need straight away, from publically available documentation, instead of having to read docs myself. "How do I do X in Y"
And if something's not obvious I can always fetch the specifics of any particular calls. But at least I didn't have to find the name of that call in the first place.
I’m right there with you on this.
Thanks for the comment. You articulated how I feel about this situation very well.
There's nothing stopping you from coding if you enjoy it. It's not like they have taken away your keyboard. I have found that AI frees me up to focus on the parts of coding I'm actually interested in, which is maybe 5-10% of the project. The rest is boiler plate, cargo-culted, Dockerfile, build system and bash environment variable passing circle of hell that I really could care less about. I care about certain things that I know will make the product better, and achieve its goals in a clever and satisfying way.
Even when I'm stuck in hell, fighting the latest undocumented change in some obscure library or other grey-bearded creation, the LLM, although not always right, is there for me to talk to, when before I'd often have no one. It doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help, even if its not always right because its at least always more reliable and you don't have to bother some grey beard who probably hates you anyway.
> The rest is boiler plate, cargo-culted, Dockerfile, build system and bash environment variable passing circle of hell that I really could care less about.
Even more so, I remember making a Chrome extension and feeling intimidated. I knew that I'd be comfortable with most of it given that JS is used but I just didn't know how to start.
With an LLM it is way faster to spin up some default config and get going versus reading a tutorial. What I've noticed in that respect is that I just read what it does and then immediately reason why it's there. "Oh, there's a manifest.json file with permissions and a few other things, fair, makes sense. Oh, so you have the HTML/CSS/JS of the extension, you have the HTML/CSS/JS of the page you're injecting some code into and you have the JS of a background worker. Ah yea, I get that."
And then I just get immediately on coding.
> What I've noticed in that respect is that I just read what it does and then immediately reason why it's there ....
How if it hallucinate and gives you wrong code and explanation? It is better to read documentations and tutorials first.
> How if it hallucinate and gives you wrong code
Then the code won't compile, or more likely your editor/IDE will say that it's invalid code. If you're using something like Cursor in agent mode, if invalid code is generated then it gets detected and the LLM keeps re-running until something is valid.
> It is better to read documentations and tutorials first.
I "trust" LLM's more than tutorials, there's so much garbage out there. For documentation, if the LLM suggests something, you can see the docstrings in your IDE. A lot of the time that's enough. If not, I usually go read the implementation if I _actually_ care about how something works, because you can't always trust documentation either.
Plenty of incorrect code compiles. It is a very bad sign that people are making comments like "Then the code won't compile".
As for my editor saying it is invalid..? That is just as untrustworthy as an LLM.
>I "trust" LLM's more than tutorials, there's so much garbage out there.
Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.
> Plenty of incorrect code compiles. It is a very bad sign that people are making comments like "Then the code won't compile".
I interpreted the "hallucination" part as the AI using functions that don't exist. I don't consider that a problem because it's immediately obvious.
Yes, AI can suggest syntactically valid code that does the wrong thing. If it obviously does the wrong thing, then that's not really an issue either because it should be immediately obvious that it's wrong.
The problem is when it suggests something that is syntactically valid and looks like it works but is ever slightly wrong. But in my experience, it's pretty common to come across that stuff like that in "tutorials" as well.
> Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.
I pretty strongly disagree. As soon as it became popular for developers to have a "brand", the amount of garbage started growing. The stuff written before the late 00's was mostly good, but after that the balance began slowly shifting towards garbage. AI definitely increased the rate at which garbage was generated though.
> Yes, AI can suggest syntactically valid code that does the wrong thing
To be fair, I as a dev with ten or fifteen years experience I do that too. That's why I always have to through test the results of new code before pushing to production. People act as if using AI should remove that step, or alternatively, as if it suddenly got much more burdensome. But honestly it's the part that has changed least for me since adopting an AI in the loop workflow. At least the AIncan help with writing automated tests now which helps a bit.
> Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.
Emphatic no.
There were heaps of rubbish being generated by people for years before the advent of AI, in the name of SEO and content marketing.
I'm actually amazed at how well LLMs work given what kind of stuff they learned from.
Wait, are you saying you don't trust language servers embedded in IDEs to tell you about problems? How about syntax highlighting or linting?
Do you mean the laconic and incomplete documentation? And the tutorials that range from "here's how you do a hello world" to "draw the rest of the fucking owl" [0], with nothing in between to actually show you how to organise a code base or file structure for a mid-level project?
Hallucinations are a thing. With a competent human on the other end of the screen, they are not such an issue. And the benefits you can reap from having LLMs as a sometimes-mistaken advisory tool in your personal toolbox are immense.
[0]: https://knowyourmeme.com/memes/how-to-draw-an-owl
The kind of documentation you’re looking for is called a tutorial or a guide, and you can always buy a book for it.
Also something are meant to be approached with the correct foundational knowledge (you can’t do 3D without geometry, trigonometry, and matrixes. And a healthy dose of physics). Almost every time I see people strugling with documentation, it was because they lacked domain knowledge.
What do you do if you "hallucinate" and write the wrong code? Or if the docs/tutorial you read is out of date or incorrect or for a different version than you expect?
That's not a jab, but a serious question. We act like people don't "hallucinate" all the time - modern software engineering devops is all about putting in guardrails to detect such "hallucinations".
Fair question. So far I've seen two things:
1. Code doesn't compile. This case is obvious on what to do.
2. Code does compile.
I don't work in Cursor, I read the code quick, to see the intent. And when done with that decide to copy/paste it and test the output.
You can learn a lot by simply reading the code. For example, when I see in polars a `group_by` function call but I didn't know polars could do that, now I know because I know SQL. Then I need to check the output, if the output corresponds to what I expect a group by function to do, then I'll move on.
There comes a point in time where I need more granularity and more precision. That's the moment where I ditch the AI and start to use things such as documentation and my own mind. This happens one to two hours after bootstrapping a project with AI in a language/library/framework I initially knew nothing about. But now I do, I know a few hours worth of it. That's enough to roughly know where everything is and not be in setup hell and similar things. Moreover, by just reading the code, I get a rough idea on how beginner to intermediate programmers think about the problem space the code is written in as there's always a certain style of writing certain code. This points me into the direction on how to think about it. I see it as a hint, not as the definitive answer. I suspect that experts think differently about it, but given that I'm just a "few hours old" in the particular language/lib/framework, I think knowing all of this is already really amazing.
AI helps with quicker bootstrapping by virtue of reading code. And when it gets actually complicated and/or interesting, then I ditch it :)
Even when it hallucinates it still solves most of the unknown unknowns which is good for getting you unblocked. It's probably close enough to get some terms to search for.
I don't think so.How can you be so sure it solves the 'unknown unknowns'?
Sample size of 1, but it definitely did in my case. I've gained a lot more confidence when coding in domains or software stacks I've never touched before, because I know I can trust an LLM to explain things like the basic project structure, unfamiliar parts of the ecosystem, bounce ideas off off, produce a barebones one-file prototype that I rewrite to my liking. A whole lot of tasks that simply wouldn't justify the time expenditure and would make it effort-prohibitive to even try to automate or build a thing.
Because I've used it for problems where it hallucinated some code that didn't actually exist but that was good enough to know what the right terms to search for in the docs were.
I interpreted that as you rushing to code something you should have approached with a book or a guide first.
Most tutorials fail to add meta info like the system they're using and versions of things, that can be a real pain.
I think the fear for those of us who love coding, stability and security, that we are going to be confronted with apples that are rotten on the inside and our work, our love, is going to turn (even more so) into pain. The challenge in computing is that the powers that decide have little overview over the actual quality and longevity of any endeavour.
I work as a consultant assessing other people's code and it's hard not to lose my religion, sort of speak.
So much this. The AI takes care of the tedious line by line what’s-the-name-of-that-stdlib-function parts (and most of the tedious test-writing parts) and lets me focus on the interesting bits like what it is I want to build and how the pieces should fit together. And debugging, which I find satisfying.
Sadly, I find it sorely lacking at dealing with build systems and that particular type of boilerplate, mostly because it seems to mix up different versions of things too much and gives you totally broken setups more often than not. I’d just as soon never deal with the he’ll that is front end build/lint/test config again.
> The AI takes care of the tedious line by line what’s-the-name-of-that-stdlib-function parts (and most of the tedious test-writing parts)
AI generated tests are a bad idea.
AI generated tests are genuinely fantastic, if you treat them like any other AI generated code and review them thoroughly.
I've been writing Python for 20+ years and I still can't use unittest.mock without looking up the details every time. ChatGPT and Claude are great at that, which means I use it more often because I don't have to deal with the frustration of figuring it out.
Just as with anything else AI, you never accept test code without reviewing it. And often it needs debugging. But it handles about 90% of it correctly and saves a lot of time and aggravation.
Well, maybe they just need X lines of so-called "tests" to satisfy some bullshit-job metrics.
Aren't stdlib functions the ones you know by heart after a while anyways?
Depends on the language. Python for instance has a massive default library, and there are entire modules I use anywhere from one a year to once a decade —- or never at all until some new project needs them.
Not everyone works in a single language and/or deep in some singular code base.
Gee do you think maybe that's why all our software sucks balls these days?
I struggle to think how one person is supposed to interact with that many languages on a daily (or even weekly) basis.
I’ve been on projects with multiple languages, but the truly active code was done in only two. The other languages were used in completed modules where we do routine maintenance and rare alterations.
"I struggle to think how one person is supposed to interact with that many languages on a daily (or even weekly) basis."
LLMs. I've expanded the circle of languages I use on a frequent basis quite dramatically since I started leaning on LLMs more. I used to be Python, SQL and JavaScript only. These days I'm using jq, AppleScript, Bash, Go, awk, sed, ffmpeg and so many more.
I used to avoid infrequently used DSLs because I couldn't hold them in my memory. Now I'll happily use "the best tool for the job" without worrying about spinning up on all the details first.
They perhaps haven’t taken away your keyboard but anecdotally, a few friends work at places where their boss is requiring them to use the LLMs. So you may not have to code with them but some people are starting to be chained to them.
Yes, there are bad places to work. There are also places that require detailed time tracking, do not allow any time to write tests, have very long hours, tons of on-call alerts, etc.
How long until it becomes the rule because of some arbitrary "productivity" metric? Sure, you may not be forced to use it, but you'll be fire for being "unproductive".
That's the case at Shopify already: https://twitter.com/tobi/status/1909251946235437514
You write that like the latter is in opposition to the former. Yet the content suggests the latter is the former
And even when that's not the case you are still indirectly working with them because your coworker is and "somehow" their code has gotten worse
> The rest is boiler plate, cargo-culted, Dockerfile, build system and bash environment variable passing
I keep seeing people saying to use an LLM to write boilerplate, but like... do you not just copy that from another project where you already wrote it?
No, because it's usually a few years old and already obsolete - the frameworks and the language have gone through a gazillion changes and what you did in 2021 suddenly no longer works at all.
I mean, the training data also has a cutoff date and changed beyond that are not reflected in the code suggestions.
Also, I know that people love to joke on modern software and JS in particular. But if you take react code from 2020 and drop it into a new react codebase it still works. Even class based components work. Yes, if you jumped on the newest framework bandwagon every time stuff will break all the time, but AI won’t be able to help you with that either. If you went for relatively stable frameworks, you can re use boilerplate completely or with relatively minimal adjustments
React is alright but the framework tooling around it changes a lot.
If you take a project from 2020 it's a bit of a pain to upgrade it.
True. But LLMs have access to the web. I’ve told ChatGPT plenty of times to verify an SDK API or if I knew the API was new, I just gave it a link to the documentation. This was mostly around various AWS SDKs
The search improvements to o3 and o4-mini have made a huge difference in the last couple of weeks.
I ran this prompt (and others like it) and it actually worked!
https://simonwillison.net/2025/Apr/18/gemini-image-segmentat...Ehh most people are good about at least throwing a warning before they break a legacy pattern. And you can also just use old versions of your tools. I'm sure the 2021 tool still does the job. Most people aren't working on the bleeding edge here. Old versions of numpy are fine.
lol, I've been cutting and pasting from the same projects I started in 2010. When you work in vanilla js it doesn't change.
I keep seeing that suggestion as well and the only sensible way I see would be to use one off boilerplate, anything else does not make sense.
If you keep re-using boilerplate once in a while copying it from elsewhere is fine. If you re-use it all the time, just get a macro setup in your editor of choice. IMHO that is way more efficient than asking AI to produce somewhat consistent boilerplate
You know. I have my boilerplate in Rails and it is just a work of art... I simply clone my BP repo, bundle, migrate, run and I have user management, auth, smtp client, sms alerts, and literally everything I need to get started. And it was just this same week I decided to try a code assistant, and my result was shockingly good, once you provide the assistant with a good clean starting point, and if you are very clear on what you want to build, then the results are just too good to be dismissed.
So yes, boilerplate, but also yes, there is definitely something to be gained from using ai assistants.
Like many others writing here, I enjoy coding (well, mostly anyway), especially the when it requires deep thought and patient experimentation to get anywhere. It's also great to preside over finally wiring together the routines (modules, libraries) that bind a project into a coherent whole.
Haven't much used AI to assist. After all, hard enough finding authentic humans capable and willing to voluntarily review/critique one's code. So far AI doesn't consistently provide that kind of help. OTOH seems almost certain over time AI systems will improve in terms of specific and comprehensive "insights" into the particular types of code one is writing.
I think an issue is that human creativity is hard to measure. Likely enough AI is even tougher to assess. Probably AI will increasingly be assigned tasks like constructing project skeletons, assuring parts can be joined together without undue strain, handling "boilerplate" and other routine chores. To be sure the landscape will look different in 50 years, I'm certain we'd be amazed were we able to see what future systems will be doing.
In any case, we shouldn't hesitate to use tools that genuinely boost our creativity. One badly needed role would be enabling development of higher reliability software. Still that's a far cry from the contributions emanating from the best of human originality, talent and motivation.
> doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help, even if its not always right because its at least always more reliable and you don't have to bother some grey beard who probably hates you anyway.
That’s a lot of trauma you’re dealing with.
I read one characterization which is that LLMs don't give new information (except to the user learning) but they reorganize old information.
Custodians of human knowledge.
That’s only true if you tokenize words rather than characters. Character tokenization generates new content outside the training vocabulary.
All major tokenisers have explicit support for encoding arbitrary byte sequences. There's usually a consecutive range of tokens reserved for 0x00 to 0xFF, and you can encode any novel UTF-8 words or structures with it. Including emoji and characters that weren't a part of the model's initial training, if you show it some examples.
Pretty sure that we’re talking apples and oranges. Yes to the arbitrary byte sequences used by tokenizers, but that is not the topic of discussion. The question is will the tokenizer come up with words not in the training vocabulary. Word tokenizers don’t, but character tokenizers do.
Source: Generative Deep Learning by David Foster, 2nd edition, published in 2023. From “Tokenization” on page 134.
“If you use word tokens: …. willnever be able to predict words outside of the training vocabulary.”
"If you use character tokens: The model may generate sequences of characters that form words outside the training vocabulary."
Why stop there? Just have it spit out the state of the bits on the hardware. English seems like a serious shackle for an LLM.
Kind of, but character-based tokens make it a lot harder and more expensive to learn semantics.
Source: Generative Deep Learning by David Foster, 2nd edition, published in 2023. From “Tokenization” on page 134.
“If you use word tokens: …. willnever be able to predict words outside of the training vocabulary.”
"If you use character tokens: The model may generate sequences of characters that form words outside the training vocabulary."
I think you're robbing yourself.
Of course, it all depends how you use the LLM. While the same can be true for StackOverflow, the LLMs just scale the issues up.
Except you do care. It's why you're frustrated and annoyed. And good!!! That feeling is because what you're describing requires solving. If something is routine, automate it. But it's really not good to automate in a statistical way, especially when that statistical tool is optimized for human preference. Because remember that also means mistakes are optimized to be missed by humans.[0]With expertise in anything, I'm sorry, but you also got to do the shit work. To be a great musician you gotta practice boring scales. It's true even if you just want to be a sub par one.
But a little grumpy is good. It drives you to fix things, and frankly, that's our job. The things that are annoying and creating friction don't need be repeated over and over, they need alternative solutions. The scripts you build are valuable. The "useless" knowledge you gain isn't so useless. Those little details add up without you knowing and make you better.
That undocumented code makes you frustrated and reminds you to document your own. You don't want to be a hypocrite. The author of the thing you're using probably thought the same thing: "No one is gonna use this garbage, I'm not going to waste my time documenting it". Yet here we are. Over and over again yet we don't learn the lesson.
I'm not gonna deny there's assholes. There are. But even assholes teach you. At worst, they teach you how not to act.
And some people are you telling you to RTM and not RTFM. Sure, it has lots of extra information in it that you don't need to get your specific job done, but have you also considered that it has lots of extra information in it? The person that wrote it clearly thought the context was important. Maybe it isn't. In that case, you learned a lesson in how not to write documentation!
What I'm getting at is that there's a lot of learning done all over the place. Trying to take out all the work and only have "the fun" is harming yourself and has a large potential to make less time for the fun stuff[0]. I'd be surprised if I'm alone in this, but a lot of stuff I enjoy now was stuff that originally frustrated me. IME this is pretty common! It's true for every person I know. Similarly, it's also true for things I learned that I thought I'd never use again. It always has a way of coming back.
I'm not going to pretend it's all fun and games. I'm explicitly saying it's not. But I'm confident in the long run it's better. Despite the lack of accuracy, I use LLMs (and Google, and even the TFM) like I would a solution guide the homework problems when I was in school. Try first, then consult. The struggle is an investment in your future. It sucks, but if all the best things in life were easy then we'd all have them. I'm just trying to convince you that it pays off.
I'm completely aware this is all context dependent. There's a time and place for everything. But given the percentages you mention (even taken as exaggeration), something sounds wrong. It's hard to suggest specific solutions without details but I'd be surprised if there weren't better and more rewarding solutions than having the LLM do it for you
[0] That's the big danger and what drives distrust in them. Because you need to work extra hard to find mistakes, increasing workload, not decreasing, because debugging is most of the job!
I share the same opinion.
While it looks like a productivity boost, there's a clear price to pay. The more you use it, the less you learn and the less you are able to assess quality.
This.
Frankly I don't want to spend 2 hours reading documentation just to find out some arcane incantation that gets the computer to do what I want it to do.
The interesting part of programming to me is designing the logic. It's the 'this, then that, except when this' flow that I'm really interested in, not the search for some obscure library that has some function that will parse this csv.
Llms are great for that, and let me get away from the pointless grind and into the things that I enjoy and are actually providing value.
The pair programming is also a super good thing. I work best when I can spitball and throw out random ideas and get quick feedback. Llms let me do that without bothering others who have their own work to do.
Most comments here surprise me: I am using Githubs Copilot / ChatGPT 4.0 at work with a code base which is mostly implements a basic CRUD service... and outside of small/trivial example (where the generated code is mostly okay), prompting is more often than not a total waste of time. Now, I wonder if I am just totally unable to write/refine good prompts for the LLM (as it works for smaller samples, I hope I am not too far off) or what could explain the huge discrepancy of experience. (Just for the record: I would totally not mind if the LLM writes the code for the stuff I have to do at work.)
To clarify my questions: - Who here uses LLMs to generate code for bigger projects at work? (>= 20k lines of code) - If you use LLMs for bigger projects: Do you need to change your prompting strategy to get good results? - What programming languages are you using in your code bases? - Are there other people here who experience that LLMs are no help for non trivial problems?
I'm in the same boat. I've largely stopped using these tools other than asking questions about a language that I'm less familiar with or a complex type in typescript for which it can be helpful (sometimes). Otherwise, I felt like I was just wasting my time and becoming lazier/worse as a developer. I do wonder whether LLMs have hit a wall and we're in a hype cycle.
Yes, I have the same feeling about the wall/hype cycle. Most of my time is understanding code and formulating a plan to change code w/o breaking anything... even if LLMs would generate 100% perfect code on the first try, it would not help in a big way.
One thing I forgot to mention is asking LLMs questions from within the IDE instead of doing a web search... this works quite nice, but again, it is not a crazy productivity boost.
Same here. We have a massive codebase with large classes and the LLMs are not very helpful. Frontend stuff is okay sometimes but the backend models are too complex at this point, I guess.
Play with Cursor or Claude Code a bit and then make a decision. I am not on the this is going to replace Devs boat, but this has changed the way I code and approach things.
Could you perhaps point me to a youtube video which demonstrates an experienced prompter sculpting code with Cursor/Clause Code?
In my search I just found trivial examples.
My critic so far:
- Examples seem always to be creating a simple application from scratch
- Examples always use super common things (like create a blog / simple website for CRUD)
What I would love to see (see elsewhere): Adding a non trivial feature to a bigger code base. Just a youtube video/demonstration. I don't care about language/framework etc. ...
This morning I made this while sipping coffee, and it solves a real problem for my gf: https://github.com/kirubakaran/xmldiffer Sure it's not enormous, and it was built from scratch, but imho it's not a trivial thing either. It would've taken me at least a day or two of full time work, and I certainly don't have a couple of days to spare on a task like this. Instead, pair programming with AI made it into a fun relaxing activity.
I am happy to read your success story with LLM and thanks for sharing.
Fully agreed, that LLMs/assisted coding is nice for these kind of contained tasks.
You are just bad with prompting or working with very obscure language/framework or bad coding pattern or all of it. I had a talk with a seasoned engineer who has been coding for 50 years and has created many amazing things over lifetime about him having really bad results with AI tools I suggested for him. When I use AI for the same purposes in the same repo he's working on, it works nicely. When he does it, results are always not what he wants. It comes down to a combination of him not understanding how to guide the LLMs to correct direction and using a language/framework (he's not familiar with) he can't judge the LLMs output. It is really important to know what you want, be able to describe it in short points (but important points). Points that you know ai will mess up if you don't specify. And also be able to figure out which direction the ai is heading with the solution and correct it EARLY rather than later. Not overloading context/memory with unnecessary things. Focusing on key areas to improve and much more. I'm using AI to get solutions done that I can definitely do myself but it'll take a certain amount of time to hunt down all documentation, API/lib calls etc. With AI, 1/10th time is enough.
I've had massive success with java, js/TS, html css, go, rust, python, bitbucket pipelines/GitHub actions, cdk, docker compose, SQL, flutter/dart, swift etc.
I've had the same experience as the person to whom you're responding. After reading your post, I have to ask: if you're putting so much effort into prompting it with specific points, correcting it often, etc., why not just write the code yourself? It sounds like you're putting a good deal of effort into prompting it.
Aren't you worried that overtime you'll rely on it too much and your offhand knowledge will get worse?
I have read somewhere, that LLMs are mostly helpful to junior developers.
Is it possible the person claiming success with all these languages/tools/technologies is just on a junior level and is subjectively correct but has no point of reference how fast coding is for seniors and how quality code looks like?
Not OP, it be comes natural and doesn't take a lot of time.
Anyway, if you want to, LLMs can today help with a ton of programming languages and frameworks. If you use any of the top 5 languages and it still doesn't work for you, either you're doing some esoteric work or you're doing it wrong.
Could you point me to a youtube video or a blog post which demonstrates how LLMs help writing code which outperforms a proficient human?
My only conditions:
- It must be demonstrated by adding a feature on a bigger code base (>= 20 LOC)
- The added feature cannot be a leaf feature (means it must integrate with the rest of the system at multiple points)
- The prompting has to be less effort/faster than to type the solution in the programming language
You can chose any programming language/framework that you want. I don't care if it is Java, JavaScript, Typescript, C, Python, ... hell, I am fine with any language with or w/o a framework.
I do not rule out, that I am just very bad with prompting.
It just surprises me, that you write you had massive successes with "java, js/TS, html css, go, rust, python, bitbucket pipelines/GitHub actions, cdk, docker compose, SQL, flutter/dart, swift etc.", if you include the usual libraries/frameworks and the diverse application areas for these technologies, even with LLMs support it seems to me crazy to be able to make meaningful contributions in non trivial code bases.
Concerning SQL I can report another fail with LLMs, in a trivial code base with a handful of entities the LLM cannot come up with basic window functions.
I would be very interested if you could write up a blog post or could make a youtube video demonstrating your prompting skills... Perhaps demonstrating with a bigger open source project in any of the mentioned languages how to add a non trivial feature with your prompting skills?
Copilot is just plain bad. The result is day and night compare with cursor + gemini 2.5 (of course with good prompting)
Tooling and available context size matters a lot. I'm having decent luck with Gemini 2.5 and Roo Code.
> Now, I wonder if I am just totally unable to write/refine good prompts for the LLM (as it works for smaller samples, I hope I am not too far off) or what could explain the huge discrepancy of experience.
Programming language / stack plays plays a big role, I presume.
Fair enough. Still, I was out of luck for some fairly simple SQL statements, were the model knows 100% of the DDL statements.
This comment section really shows the stark divide between people who love coding and thus hate AI, and people who hate coding and thus love AI.
Honestly, I suspect the people who would prefer to have someone or something else do their coding, are probably the devs who are already outputting the worst code right now.
I love coding but I also love AI.
I don't know if I'm a minority but I'd like to think there are a lot of folks like me out there.
You can compare it to someone who is writing assembly code and now they've been introduced to C. They were happy writing assembly but now they're thrilled they can write things more quickly.
Sure, AI could lead us to write buggier code. Sure, AI could make us dumber because we just have AI write things we don't understand. But neither has to be the case.
With better tools, we'll be able to do more ambitious things.
I think there are a lot of us, but the people who dislike AI are much more vocal in online conversations about it.
(The hype merchant, LinkedIn influencer, Twitter thread crowd are super noisy but tend to stick to their own echo chambers, it's rare to have them engage in a forum like Hacker News directly.)
> I don't know if I'm a minority
No, there's plenty of top-class engineers who love coding with AI. e.g. Antirez.
I love AI as a concept.
I hate the reality of our current AI, which is benefitting corporations over workers, being used for surveillance and censorship (nevermind direct social control via misinformation bots), and is copying the work of millions without compensating them in order to do it.
And the push for coders to use it to increase their output, will likely just end up meaning expectations of more LoC and more features faster, for the same pay.
But FOSS, self-hosted LLMs? Awesome!
How is using Claude over Llama benefitting corporations over workers? I work with AI every day and sum total of my token spend across all providers is less than a single NVidia H100 card I'd have to buy (from a pretty big corporation!), at the very least, for comparable purpose?
How are self-hosted LLMs not copying the work of millions without compensating them for it?
How is the push for more productivity through better technology somehow bad?
I am pro FOSS but can't understand this comment.
Right, just how back in the day, people who loved writing assembly hated high level languages and people who found assembly too tedious loved compilers.
First of all, Lisp, Fortran and COBOL had been around most of the time when assembly was popular. Assembly was used because of resource constraints.
Secondly, you are not writing anything you get from an LLM. You prompt it and it spits out other people's code, stripped of attribution.
This is what children do: Ask someone to fix something for you without understanding the result.
Good artists copy, great artists steal
Picasso (if he really said that) had a machine painting for him?
Picasso explicitly wanted his designs (for cutlery, plates, household items he designed) to be mass-produced, so your question is not as straightforward as you make it to be.
What is the connection to machine generated code? He designed the items manually and mass produced them.
No one objects to a human writing code and selling copies.
Apart from that, this is the commercial Picasso who loved money. His early pre-expressionist paintings are godlike in execution, even if someone else has painted a Pierrot before him.
Picasso also never said this, and this quote is about ideas, not automation.
I very much understand the result of code that it writes. But I have never gotten paid to code. I get paid to use my knowledge of computers and the industry to save the company money or to make the company money.
Do you feel the same way when you delegate assignments to more junior developers and they come back with code?
It’s almost like there’s a big range of different comprehension styles among human beings, and a varying set of preferences that go with those.
Cant one enjoy both? After all, coding with AI in practice is still coding, just with a far higher intensity.
It is absolutely possible to enjoy both- I have used LLMs to generate code for ideas about alternate paths to take when I write my code- but prompt generation is not coding, and there are WAY too many people who claim to be coding when they have in fact done nothing of the sort.
> a far higher intensity
I'm not sure what this is supposed to mean. The code that I've gotten is riddled with mistakes and fabrications. If I were to use it directly, it would significantly slow my pace. Likewise, when I use LLMs to offer alternative methods to accomplish something, I have to take the time to sit down and understand what they're proposing, how to actually make it work, and whether that route(s) would be better than my original idea. That is a significant speed reduction.
The only way I can imagine LLMs resulting in "far higher intensity" is if I was just yolo'ing the code into my program, and then doing frantic integration, correction, and bugfix work afterwards.
Sure, that's "higher intensity", but that's just working harder and not smarter.
[dead]
It is not coding the same way riding a bus is not driving
You may get to the same destination, but it is not the same activity
What if I prefer to have a clone of me doing my coding, and then I throw my clone under the bus and start to (angrily) hyperfocus explore and change every piece to be beautiful? Does this mean I love coding or I hate coding?
It's definitely a personality thing, but that's so much more productive for me, than convincing myself to do all the work from scratch after I had a design.
I guess this means I hate coding, and I only love the dopamine from designing and polishing my work instead of making things work. I'm not sure though, this feels like the opposite of hate coding.
If you create a sufficiently absurd hypothetical, anything is possible.
Or are you calling an LLM a "clone" of you? In that case, it's more, "if you create a flawed enough starting premise, anything is possible".
> flawed enough starting premise
That's where we start to disagree what future looks like, then.
It's not there yet, in that the LLM-clone isn't good enough. But amusingly a not nearly good enough clone of me already made me more productive, in that I'm able to deliver more while maintaining the same level of personal satisfaction with my code.
The question of increasing productivity and what that means for us as laborers is another entire can of worms, but that aside, I have never yet found LLM-gen'd code that met my personal standards, and sped up my total code output.
If I want to spend my time refactoring and bugfixing and rewriting and integrating, rather than writing from scratch and bugfixing, I can definitely achieve that by using LLM code, but the overall time has never felt different to me, and in many cases I've thrown out the LLM code after several hours due to either sheer frustration with how it's written, or due to discovering that the structure it's using doesn't work with the rest of the program (see: anything related to threading).
I replaced "code" for "singing" to make a point.
> This comment section really shows the stark divide between people who love singing and thus hate AI-assisted singing, and people who hate singing and thus love AI-assisted singing.
> Honestly, I suspect the people who would prefer to have someone or something else do their singing, are probably the singers who are already outputting the worst singing right now.
The point is: just because you love something, doesn't mean you're good at it. It is of course positively correlated with it. I am in fact a better singer because I love to sing compared to if I never practiced. But I am not a good singer, I am mediocre at best (I chose this example for a reason, I love singing as well as coding! :-D)
And while it is easier to become good at coding than at singing - for professional purposes at least - I believe that the effect still holds.
I think the analogy/ substitution falls apart in that singing is generally not very stable or lucrative (for 99.999% of singers), so it is pretty rare to find someone singing who hates it. Much less uncommon to find people working in IT who hate the specific work of their jobs.
And I think we do tend to (rightfully) look down on e.g. singers who lip-sync concerts or use autotune to sing at pitches they otherwise can't, nevermind how we'd react if one used AI singing instead of themselves.
Yes, loving something is no guarantee of skill at it, but hating something is very likely to correspond to not being good at it, since skills take time and dedication to hone. Being bad at something is the default state.
I have been working in IT for 5 years while being a professional musician for 8 years (in France and touring in Europe). I've never met a single singer who told me they hate singing, on other hand, I can't even count how many of my colleagues told me how much they hate coding.
Another analogy would be with sound engineering. I've met sound engineer who hate their job as they would rather play music. They are also the ones whose jobs are likely to be replaced by AI. And I would argue that the argument stand stills. AI Sound Engineers who hate working on sound are often the bad sound engineers.
> I think the analogy/ substitution falls apart in that singing is generally not very stable or lucrative (for 99.999% of singers), so it is pretty rare to find someone singing who hates it.
I tried to cover this particular case with:
> And while it is easier to become good at coding than at singing - for professional purposes at least - I believe that the effect still holds.
---
> Yes, loving something is no guarantee of skill at it, but hating something is very likely to correspond to not being good at it, since skills take time and dedication to hone. Being bad at something is the default state.
I tried to cover this particular case with:
> It is of course positively correlated with it.
---
> Being bad at something is the default state.
Well, skill-wise yes. But being talented at something can happen, even when you hate something.
> And I think we do tend to (rightfully) look down on e.g. singers who lip-sync concerts or use autotune to sing at pitches they otherwise can't, nevermind how we'd react if one used AI singing instead of themselves.
Autotune is de rigueur for popular music.
In general, I'm not sure that I agree with looking down on people.
Looking down on someone for actions they choose to take, versus for intrinsic characteristics of who they are, are very different things.
I love coding - but I am not very good at it. I can describe what I want in great detail, with great specificity. But I am not personally very good at turning that detailed specification into the proper syntax and incantations.
AI is like jet fuel for me. It’s the translation layer between specs and code I’ve always wanted. It’s a great advisor for implementation strategies. It’s a way to try new strategies in code quickly.
I don’t need to get anyone else to review my code. Most of this is for personal projects.
I don’t really write professionally, so I don’t have a ton of need for it to manage realities of software engineering (large codebases, peer reviews, black box internal systems, etc). That being said - I do a reasonable amount of embedded Linux work, and AI understands the Linux kernel and device drivers very well.
To extend your metaphor: AI is like a magic microphone that makes all of my singing sound like Tony Rice, my personal favorite singer. I’ve always wanted to sound like him - but I never will. I don’t have the range or the training. But AI allows my coding level to get to that corresponding level with writing software.
I absolutely love it.
This is really interesting to me.
Do you love coding, or do you love creating programs?
It seems like the latter given your metaphor being a microphone to make you seem like you could sing well, i.e. wanting the end state itself rather than the achievement via the process.
"wanted to sound like him" vs "wanted to sing like him"
I very much like creating programs.
The code is a tool. Nothing more.
I love the shed I built for my family. I don’t have a single feeling for the hammer I used to build it.
For the record: I can sing well. I just can’t sound like Tony Rice. I don’t have his vocal cords or training.
I enjoy using tools to create, very much so. The process is fun to me. The thing I create is a record of the process/ work that went into it. Planning and making a cut with a circular saw feels good. Rattling a spray paint can is exciting.
I made a cyber deck several months back, and I opted to carve the case from wood rather than 3d printing or using a premade shell. That hands-on work is something I'm proud of. I don't even use the deck much, it was for the love of building one.
To be fair, I don't have any problem with people who do their jobs for the paycheck alone, because that's the world capitalism has forced us into. Companies don't care about or reward you for the skills you possess, only how much money you make them (and they won't compensate you properly for it, either), so there's no advantage to tying your self-worth up in what you produce for them.
But I do think that it's sad we're seeing creative skills, whether writing coding composing or drawing, be devalued by AI as we are.
> For the record: I can sing well.
That is awesome! It's a great skill to have, honestly. As someone whose body tends to be falling apart more than impressing anyone, I envy that. :)
yeah i definitely enjoy the craft and love of writing boilerplate or manually correcting simple errors or looking up functions /s. i hate how it's even divided into "two camps", it's more like a big venn diagram.
Who write boilerplate this day? I just lift the code from the examples in the docs (especially css frameworks). And I love looking at functions docs, because after doing it a few times, you develop an holistic understanding of the library and your speed increases. Kinda like learning a foreign language. You can use an app to translate everything, or asks for the correct word when the needs arises. The latter is a bit frustrating at the beginning, but that’s the only way to become fluent.
Seriously, I see this claim thrown around as though everyone writes the same starting template 50 times a week. Like, if you've got a piece of "boilerplate" code you're constantly rewriting... Save It! Put it in a repo or a snippet somewhere that you can just copy-paste when you need it.
You don't need a multi-million dollar LLM to give you slightly different boilerplate snippets when you already have a text editor on your computer to save them.
i think everyone here has extremely different ideas of what AI coding actually is and it's frustrating because basically everyone is strawmanning (myself included probably), as if using it means i'm not looking at documentation or not understanding what is goin on at all times.
it's not about having the LLM write some "starter pack" toy scaffold. i means when i implement functionality across different classes and need to package that up and adapt, i can just tell the LLM how to approach it and it can produce entire sections of code that would literally just be adaptations of certain things. or to refactor certain pieces that would just be me re-arranging shit.
maybe that's not "boilerplate", but to me it's a collosal waste of my time that could be spent trying to solve a new problem. you can't package that up into a "code snippet" and it's not worth the time carefully crafting templates. LLM can do it faster, better, and cost me near nothing.
> it's a collosal waste of my time
> LLM can do it faster, better, and cost me near nothing.
And this is one the thing I'm skeptical about. The above use case is a symptom of all code and no design. It is a waste of time because you're putting yourself in a corner, architecture wise. Kinda like building on a crooked foundation.
I've never done refactoring where I'm writing a lot of code, it's mostly just copy-paste and rebuilding the connection between modules (functions, classes, files, packages,...). And if the previous connections were well understood and you have a clear plan for the new design, then it's a no-brainer to get there. Same when adapting code, I'm mostly deleting lines and renaming variables (regex is nice).
Maybe I'm misunderstanding things, but unless it's for small scripts or very common project types, I haven't seen the supposed productivity gain compared to traditional tooling.
yes, that is one aspect of it.
1) refactoring. copy paste, re-arrange, extract, delete and rebuild the connection. i have the mental model and tell the LLM do do it across multiple files or classes. does it way faster and exactly how i would do it given the right prompt which is just a huge file that dictates how things are structured, style, weird edge cases i encountered as time goes on.
2) new features or sections based on existing. i have a service class and want to duplicate and wire up different sections across domains. not easy enough to just be templated, but LLM can do it and understand the nuances. again, generate multiple files across classes no problem.
i can do all these things "fast". i can do them even faster when using the LLM, it offloads the tediousness and i save my brian for other tasks. alot of times i'm just researching my next thing while it chugs away. i come back, lint and review and i'm good to go.
i'm honestly still writing the majority of the code myself, esp if it's like design stuff or new features where the requirements and direction aren't as clear, but when i need to it gives me a huge boost.
keeps me in the flow, i basically recharge while continuing to code. and it's not a small script but a full fledged app, albeit very straightforward architecture wise. the gains are very real. i'm just surprised at the sentiment on HN around it. it's not even just skepticism but outright dogging on it.
I love this detailed discussion of how people are actually using LLMs for coding, and I think this rarely happens in professional spaces currently.
I do see more people who seem to be using it to replace coding skill rather than augment it, and I do worry about management's ability to differentiate between those versus just reverting to LoC. And whether it will become a demand for more code, for the same pay.
Maybe it's different mindset at play. Refactoring these is my way of recharging (because I approach as a nice puzzle to learn how to do it effectively, kinda like a break from the main problem). And the LLM workflow don't sit well with me because I dislike checking every line of generated code. Traditional tooling is deterministic, so I do the check once and move on.
Maybe all code is boilerplate for them? I use libraries and frameworks exactly for the truly boilerplate part. But I still try to understand those code I depends on, as some times I want to deviate from the defaults. Or the bug might be in there.
It’s when you try to use an exotic language, you realize the amount of work that has been done to minimize dev time in more mainstream languages.
Every PR I have to review with an obviously LLM-generated title stuffed with adjectives, and a useless description containing an inaccurate summary of the code changes pushes me a little bit more into trying to make my side projects profitable in the hope that one takes off. It usually only gets worse from there.
Documentation needs to be by humans for humans, it's not a box that's there to be filled with slop.
> The actual documentation needs to be by humans for humans.
This is true for producing the documentation but if there is an LLM that can take said documentation and answer questions about it is a great tool. I think I get the answer far quicker with LLM than sifting through documentation when looking for existence of a function in a library or a property on an object.
The documentation are for answering your questions, it’s not a puzzle to be solved. Using the reference docs assumes that you already have an understanding about the thing that is being documented and you’re looking for specificity or details. If not, the correct move is to go through a book, a tutorial, or the user guide. Aka the introductory materials.
seeing a lot of `const thing = doThing(); // add this line` showing up lately too.
See my reply to another comment - I don't think the divide is as stark as you claim.
(And I don't enjoy the value judgement)
I think that comment is conflating 2 different things: 1) people like you and I who use LLMs for exploring alternative methods to our own, and 2) people who treat LLMs like Stack Overflow answers they don't understand but trust because it's on SO.
Yes, there are tasks or parts of the code that I'm less interested in, and would happily either delegate or negotiate-off to someone else, but I wouldn't give those to a writer who happens to be able to write in a way that approximates program code, I'd give them to another dev on the team. A junior dev gets junior dev tasks, not tasks that they don't have the skills to actually perform, and LLMs aren't even at an actual junior dev level, imhe.
I noted in another comment that I've also used LLMs to get ideas for alternate ways to implement something, or to as you said "jump start" new files or programs. I'm usually not actually pasting that code into my IDE, though- I've tried that, and the work to make LLM-generated code fit into my programs is way more than just writing out the bits I need, where I need. That is clearly not the case for a lot of people using LLMs, though.
I've seen devs submitting PRs with giant blocks of clearly LLM-gen'd code, that they've tried to monkey-wrench into working with the rest of the program (often breaking conventions or secure coding standards). And these aren't junior devs, they're devs that have been working here for years and years.
When you force them to do a code review, they know it's not up to par, but there is a weird attitude that LLM-gen'd code is more acceptable to be submitted with issues than personally-written code. As though it's the LLM's fault or job to fix, even though they prompted and copied and badly-integrated and PR'd it.
And that's where I think there is a stark divide. I think you're on my side of the divide (at least, I didn't get the impression that you hate coding), it just sounds like you haven't really seen the other side.
My personal dime-store psych theory is that it's the same mental mechanism that non-technical computer users fall into of improperly trusting/ believing computers to produce correct information, but now happening to otherwise technical folks too because "AI" is still a black box technology to most of us, like computers in general are to non-techies.
LLMs are really really cool, and really really impressive to me, and I've had 'wow' moments where they did something that makes you forget what they are and how they work, but you can't let that emotional reaction towards it override the part that knows it's just a token chain. When you do, people end up (obviously on the very extreme end) 'dating' them, letting them make consequential "decisions", or just over-trusting their output/code.
I like solving problems but I hate coding. Wasting 20 minutes because you forgot a semicolon or something is not fun. AI let's me focus on the problem and not bother with the tedious coding bit.
That comment makes me deeply suspicious about your debugging skills. And the formatting of your code.
I write code to solve problems for my own use or for my hobby electronics projects. Asking chatgpt to write a script is faster than reading the documentation of some python library.
Just last week it wrote me a whole application and gui to open a webpage at a specific time. Yeah it breaks after the first trigger but it works for what I need.
And that's OK! I'm not trying to gatekeeping anyone from the title of coder or programmer. But what is fine for quick small scripts and throwaway code can be quite bad even for smallish projects. If you're trying to solve a problem in a systematic way, there's a lot of concerns that pertain to the durability of the solution.
There's a lot of literature about these concerns and a lot of methodologies to alleviate them. I (and others) are judging LLMs in light of those concerns. Mostly because speed was never an issue for us in prototypes and scripts (and it can be relaxing to learn about something while scripting it). The issue is always reliability (can it do what I want) and maintainability (can I change it later). Performance can also be a key issue.
Aside: I don't know the exact problem you were solving, but based on the description, that could have been done with systemd timers (macOS services are more of a pain to write). Yes, there's more to learn, but time triggering some command is a problem solved (and systemd has a lot more triggers).
This doesn't even make sense, forgetting a semicolon is immediately caught by the compiler. What positive benefits does AI provide here?
It depends on the language. Javascript is fine without semicolons until it isn't. Of course, a linter will solve this more reliably than AI.
I started “coding” in 1986 in assembly on an Apple //e and by the time I graduated from college, I had experience with 4 different processor families - 65C02, 68K, PPC and x86. I spent the first 15 years of my career programming in C and C++ along with other languages.
Coding is just a means to an end - creating enough business value to convince the company I’m working for to give me money that I can exchange for food and shelter.
If AI can help me do that faster, I’m going to use it. Neither do I want to spend months procuring hardware and managing building out a server room (been there done that) when I can just submit some yaml/HCL and have it done for me in a few minutes.
I love coding and don't love questioning AI and checking responses.
But the simple fact is I'm much more productive with AI and I believe this is likely true for most programmers once they get adjusted.
So for production, what I love the most doesn't really matter, otherwise I'd be growing tomatoes and guiding river rafting expeditions. I'm resigned to the fact the age of manually writing "for loops" is largely over, at least in my case.
If devs would learn how to document their work properly then there'd be much less use for AI and more people who enjoyed coding.
>Honestly, I suspect the people who would prefer to have someone or something else do their coding
Alright, please stop using SDK's, google, stackoverflow, any system libraries. You prefer to do it for yourself right?
If you're using those things to do *the core function* of the program you're writing, that's an issue.
SDKs and libraries are there to provide common (as in, used repeatedly, by many) functions that serve as BUILDING BLOCKS.
If you import a library and now your program is complete, then you didn't actually make a useful program, you just made a likely less efficient interface for the library.
BUT ALSO-
SDKs and libraries are *vetted* code. The advantage you are getting isn't just about it having been written for you, it's about the hundreds of hours of human code review, iteration, and thought, that goes into those libraries.
LLM code doesn't have that, so it's not about you benefitting from the knowledge and experience of others, it's purely about reducing personally-typed LoC.
And yes, if you're wholesale copy-pasting major portions of your program from stack overflow, I'd say that's about as bad as copy-pasting from ChatGPT.
Do you typically find reductio ad absurdum arguments to be persuasive?
If there’s an SDK that implements exactly the product you’re trying to build, then you’re just selling someone else’s SDK.
Exactly, thank you!
> Honestly, I suspect the people who would prefer to have someone or something else do their coding
Have we forgotten that we advanced in software by building on the work of others?
They are not building on the work of others, they are taking the laundered work of others.
I can guess your background (and probably age) from this comment
Finishing sentences with a full stop would put me above 30, yes.
EDIT: incidentally, Suchir Balaji was 26 when he held those views.
[dead]
No, problem is when others are no longer needed, a machine gets to do everything, and only a few selected humans get to take care of the replicator machine.
This belies the way that LLM code is being used.
People aren't taking LLM code and then thoughtfully refactoring and improving it, they're using it to *avoid* doing that, by treating the generated code as though it's already had that done.
That's why the pro-LLM-code people in this very thread are talking about using it to automate away the parts of the coding they don't like. You really think they're then going to go back and improve on the code past it minimally working?
There will be no advancement from that, just mediocre or bad code going unreviewed and ignored until it breaks.
I spend most of my time fixing that shit.
After all, if we lose the joy in our craft, what exactly are we optimizing for?
Solving problems for real people. Isn't the answer here kind of obvious?
Our field has a whole ethos of open-source side projects people do for love and enjoyment. In the same way that you might spend your weekends in a basement woodworking shop without furnishing your entire house by hand, I think the craft of programming will be just fine.
Same as when higher-level languages replaced assembly for a lot of use cases. And btw, at least in places I've worked, better traditional tooling would replace a lot more headcount than AI would.
Not even close, those were all deterministic, this is probabilistic.
The output of the LLM is probabilistic. The code you actually commit or merge is not.
The parent is saying that when higher-level languages replaced assembly languages you only had to learn the higher level language. Once you learned the higher level language the machine did precisely what you specified and you did not have to inspect the assembly language to make sure it was compliant. Furthermore you were forced to be precise and to understand what you were doing when you were writing the higher level language.
Now you don't really have to be precise at any level to get something 'working'. You may not be familiar with the generated language or libaries but it could look good enough (like the assembly would have looked good enough). So, sure, if you are very familiar with the generated language and libraries and you inspect every line of generated code then maybe you will be ok. But often the reason you are using an LLM is because e.g. you don't understand or use bash frequently enough to get it to do what you want. Well, the LLM doesn't understand it either. So that weird bash construct that it emitted - did you read the documentation for it? You might have if you had to write it yourself.
In the end there could be code in there that nothing (machine or human) understands. The less hard-won experience you have with the target and the more time-pressed you are the more likely it is that this will occur.
Exactly. If LLMs were like higher level languages you'd be committing the prompt. LLMs are actually like auto-complete, snippets, stackoverflow and rosetta code. It's not a higher level of abstraction, it's a tool for writing code.
i'm just vibing though, maybe i merge, maybe i don't, based on the vibes
That sounds like a lot of work, better ask a LLM whether to merge.
It does the PR for me, too
And argues on behalf of you in the PR. "You're absolutely right, this should not be merged."
Yes.
The output of the LLM is determined by the weights (parameters of the artificial neural network) estimated in the training as well as a pseudo-random number generator (unless its influence, called "temperature", is set to 0).
That means LLMs behave as "processes" rather than algorithms, unlike any code that may be generated from them, which is algorithmic (unless instrcuted otherwise; you could also tell an LLM to generate an LLM).
The code that the compiler generates, especially in the C realm, or with dynamic compilers is also not regular, hence the tooling constraints in high integrity computing environments.
So what? I know most compilers are deterministic, but it really only matters for reproducible builds, not that you're actually going to reason about the output. And the language makes few guarantees about the resulting instructions.
Yet the words you chose to use in this comment were entirely modelled inside your brain in a not so different manner.
I already see this happening with low code, SaaS and MACH architectures.
What used to be a project doing a CMS backend, now is spent doing configurations on a SaaS product, and if we are lucky, a few containers/serveless for integrations.
There are already AI based products that can automate those integrations if given enough data samples.
Many believe AI will keep using current programming languages as translation step, just like those Assembly developers thought compiling via Assembly text generation and feeding into an Assembly would still be around.
> just like those Assembly developers thought compiling via Assembly text generation and feeding into an Assembly would still be around
Confused by what you mean. Is this not the case?
No, only primitive UNIX toolchains still do this, most modern compilers generate machine code directly, without having to generate Assembly text files and executing the Assembler process on it.
You can naturally revert to old ways, by asking for the Assembly manually, and call the Assembler yourself.
> Solving problems for real people. Isn't the answer here kind of obvious?
No. There are a thousand other ways of solving problems for real people, so that doesn't explain why some choose software development as their preferred method.
Presumably, the reason for choosing software development as the method of solving problems for people is because software development itself brings joy. Different people find joy in different aspects even of that, though.
For my part, the stuff that AI is promising to automate away is much of the stuff that I enjoy about software development. If I don't get to do that, that would turn my career into miserable drudgery.
Perhaps that's the future, though. I hope not, but if it is, then I need to face up to the truth that there is no role for me in the industry anymore. That would pretty much be a life crisis, as I'd have to find and train for something else.
"There are a thousand other ways of solving problems for real people, so that doesn't explain why some choose software development as their preferred method."
Software development is almost unique in the scale that it operates at. I can write code once and have it solve problems for dozens, hundreds, thousands or even millions of people.
If you want your work to solve problems for large numbers of people I have trouble thinking of any other form of work that's this accessible but allows you to help this many others.
Fields like civil engineering are a lot harder to break into!
> That would pretty much be a life crisis, as I'd have to find and train for something else.
There's inertia in the industry. It's not like what you're describing could happen in the blink of an eye. You may well be at the end of your career when this prophecy is fulfilled, if it ever comes true. I sure will be at the end of mine and I'll probably work for at least another 20 years.
The inertia argument is real, and I would compare it to the mistaken believe of some at IBM in the 1970s that SQL would be used by managers to query relational databases directly, so no programming was needed anymore.
And what happened? Programmers make the queries and embed them into code that creates dashboards that managers look at. Or managers ask analysts who have to interpret the dashboards for them... It rather created a need for more programmers.
Compare embedded SQL with prompts - SQL queries compared to assembler or FORTRAN code is closer to English prose for sure. Did it take some fun away? Perhaps, if manually traversing a network database is fun to anyone, instead of declaratively specifying what set of data to retrieve. But it sure gave new fun to people who wanted to see results faster (let's call them "designers" rather than "coders"), and it made programming more elegant due to the declarativity of SQL queries (although that is cancelled out again by the ugliness of mixing two languages in the code).
Maybe the question is: Does LLM-based coding enable a new kind of higher level "design flow" to replace "coding flow"? (Maybe it will make a slightly different group of people happy?)
This echoes my sentiment that LLMs are higher level programming languages. And, as every layer of abstraction, they add assumptions that may or may not fit the use case. The same way we optimize SQL queries by knowing how the database makes a query plan, we need to optimize LLM outputs, specially when the assumptions given are not ideal.
> No. There are a thousand other ways of solving problems for real people, so that doesn't explain why some choose software development as their preferred method.
I don't see why we should seek an explanation if there are thousands of ways to be useful to people. Is being a lawyer particularly better than being an accountant?
I'm probably just not as smart or creative as you, but say my problem is I have a ski cabin that I want to rent it to strangers for money. Nevermind a thousand, What are 100 ways without using software that I could do something about that, vs listing it on Airbnb?
I was speaking about solving people's problems generally. It's easy to find specific problems that are best addressed with software, just as it's easy to find specific problems that can't be addressed with software.
solving real problems is the core of it, but for a lot of people the joy and meaning come from how they solve them too. the shift to AI tools might feel like outsourcing the interesting part, even if the outcome is still useful. side projects will stick around for sure, but i think it's fair to ask what the day-to-day feels like when more of it becomes reviewing and prompting rather than building.
> Solving problems for real people. Isn't the answer here kind of obvious?
Look at the majority of the tech sector for the last ten years or so and tell me this answer again.
Like I guess this is kind of true, if "problems for real people" equals "compensating for inefficiencies in our system for people with money" and "solutions" equals "making a poor person do it for them and paying them as little as legally possible."
Those of us who write software professionally are literally in a field premised on automating other people's jobs away. There is no profession with less claim to the moral high ground of worker rights than ours.
I often think about the savage job-destroying nature of the open source community: hundreds of thousands of developers working tirelessly to unemploy as many of their peers as possible by giving away the code they've written for free.
(Interesting how people talk about AI destroying programming jobs all the time, but rarely mention the impact of billions of dollars of code being given away.)
Would vim or python be created by a company? It’s hard to see how they take jobs away
Open source software is not just different in the license, it’s different in the design
Linux also doesn’t take jobs away - the majority of contributors are paid by companies, afaik
Right: that's the point. Open source has created millions of jobs by increasing the value that individual software developers can provide.
> Those of us who write software professionally are literally in a field premised on automating other people's jobs away.
How true that is depends on what sort of software you write. Very little of what I've accomplished in my career can be fairly described as "automating other people's jobs away".
"Ten year contract you say?"
"Yes, yes... Satellites stay in orbit for a while. What about it?"
"Looks a bit cramped in there."
"Stop complaining, at least it's a real job, now get in, we're about to launch."
Speak for yourself.
I've worked in a medical space writing software so that people can automate away the job that their bodies used to do before they broke.
You're automating the 1's and 0's. There could be millions of people in an assembly like line of buttons, being paid minimum wage to press either the 1 or 0 button to eventually trigger the next operation.
Now all those jobs are gone because of you.
Bit of a tangent but...
Haven't we been automating jobs away since the industrial revolution? I know AI may be an exception to this trend, but at least with classical programming, demand goes up, GDP per capita goes up, and new industries are born.
I mean, there's three ways to get stuff done: do it yourself, get someone else to do it, or get a machine to do it.
#2 doesn't scale, since someone still has to do it. If we want every person to not be required to do it (washing, growing food, etc), #3 is the only way forward. Automation and specialization have made the unthinkable possible for an average person. We've a long way to go, but I don't see automation as a fundamentally bad thing, as long as there's a simultaneous effort to help (especially those who are poor) transition to a new form of working.
We have always automated, because we can.
What is qualitatively different this time is that it affects intellectual abilities - there is nothing higher up in the work "food chain". Replacing physical work you could always argue you'd have time to focus on making decisions. Replacing decision making might mean telling people go sit on the beach and take your universal basic income (UBI) cheque, we don't need you anymore.
Sitting on the beach is not as nice as it sounds for some; if you don't agree, try doing it for 5 years. Most people require work to have some sense of purpose, it gives identity, and it structures their time.
Furthermore, if you replaced lorry drivers with self-driving cars, you'd destroy the most commonly held job in North America as well as South America, and don't tell me that they can be retrained to be AI engineers or social media influencers instead (some can only be on the road, some only want to be on the road).
I agree that we have been able to automate a lot of jobs, but it's not like intellectual jobs have completely replaced physical labor. Electricians, phlebotomists, linemen, firefighters, caregivers, etc, etc, are jobs that current AI approaches don't even scratch. I mean, Boston dynamics has barely been able to get a robot to walk.
So no, we don't need to retrain them to be AI engineers if we have an active shortage of electricians and plumbers. Now, perhaps there aren't enough jobs—I haven't looked at exact numbers—but we still have a long ways to go before I think everything is automated.
Everything being slop seems to be the much more likely issue in my eyes[1].
[1] https://titotal.substack.com/p/slopworld-2035-the-dangers-of...
> as long as there's a simultaneous effort to help (especially those who are poor) transition to a new form of working.
Somehow everyone who says this misses that never in the history of the United States (and most other countries tbh) has this been true.
We just consign people to the streets in industrial quantity. More underserved to act as the lubricant for capitalism.
But... My local library has a job searching program? I have a friend who's learning masonry at a government sponsored training program? It seems the issue is not that resources don't exist, but that these people don't have the time to use them. So it's unfair to say they don't exist. Rather, it seems they're structured in an unhelpful way for those who are working double jobs, etc.
I see capitalism invoked as a "boogey man" a lot, which fair enough, you can make an emotional argument, but it's not specific enough to actually be helpful in coming up with a solution to help these people.
In fact, capitalism has been the exact thing that has lifted so many out of poverty. Things can be simultaneously bad and also have gotten better over time.
I would argue that the biggest issue is education, but that's another tangent...
> So it's unfair to say they don't exist. Rather, it seems they're structured in an unhelpful way for those who are working double jobs, etc.
I'll be sure to alert the next person I encounter working UberEats for slave wages that the resources exist that they cannot use. I'm sure this difference will impact their lives greatly.
Edit: My point isn't that UberEats drivers make slave wages (though they do): My point is that from the POV of said people and others who need the aforementioned resources, whether they don't exist or exist and are unusable is fucking irrelevant.
Slave wages? Like the wages for a factory worker in 1918[1]? $1300 after adjusting for inflation. And that was gruelling work from dawn to dusk, being locked into a building, and nickel and dimed by factory managers. (See the triangle shirtwaist factory). The average Uber wage is $20/hour[2]. Say they use 2 gallons of gas (60 mph at 30 mpg) at $5/gallon. That comes out to $10/hour, which is not great, but they're not being locked into factories and working from dawn to dusk and being fired when sick. Can you not see that this is progress? It's not great, we have a lot of progress to make, but it sure beats starving to death in a potato famine.
[1] https://babel.hathitrust.org/cgi/pt?id=mdp.39015022383221&se...
[2] https://www.indeed.com/cmp/Uber/salaries/Driver (select United States as location)
> Slave wages? Like the wages for a factory worker in 1918[1]? $1300 after adjusting for inflation.
I think they were using “slave wages” as a non-literal relative term to the era.
As you did.
A hundred years before your example, the “slave wages” were actually slave wages.
I think it’s fair to say a lot of gig workers, especially those with families, are having a very difficult time economically.
I expect gig jobs lower unemployment substantially, due to being convenient and easy to get, and potentially flexible with hours, but they lower average employment compensation.
> I think it’s fair to say a lot of gig workers, especially those with families, are having a very difficult time economically.
Great point. I wonder if this has to do with the current housing crisis and cost of utilities... Food has never been more affordable, in fact free with food banks and soup kitchens. But (IMHO) onerous zoning has really slowed down development and driven up prices.
Another cost is it's pretty much impossible to do anything without a smartphone and internet. I suppose libraries have free internet, but being able to get to said library is another issue.
And like you said, contract work trades flexibility for benefits, and that gets exploited by these companies.
I guess it just sucks sometimes because these issues are super hairy (shut down Uber, great, now you've just put everyone out of a job). "For every complex problem there is a solution which is clear, simple, and wrong."
Replying to your edit: it is relevant, because it means people are trying but it isn't working. When people aren't trying, you have to get people to start trying. When people are trying but it isn't working, you have to help change the approach. Doubling down on a failing policy (e.g. we just need to create more resources) is failing to learn from the past.
At some point, you've stopped participating in good faith with the thread and are instead trying to push it towards some other topic; in your case, apparently, a moral challenge against Uber. I think we get it; can you stop supplying superficial rebuttals to every point made with "but UberEats employs [contracts] wave slaves"?
> Those of us who write software professionally are literally in a field premised on automating other people's jobs away.
Depends what you write. What I work on isn't about eliminating jobs at all, if anything it creates them. And like, actual, good jobs that people would want, not, again, paying someone below the poverty line $5 to deliver an overpriced burrito across town.
I think most of the time when we tell ourselves this, it's cope. Software is automation. "Computers" used to be people! Literally, people.
> "Computers" used to be people! Literally, people.
Not always. Recruitment budgets have limits, so it's a fixed number of employees either providing services to a larger number of customers thanks to software, or serving fewer customers or do so less often without the software.
https://en.wikipedia.org/wiki/Computer_(occupation)
Thank you for the link, the reference you're making slipped past me. That said, I think my point still holds: software doesn't always have to displace workers, it can also help current employees scale their efforts when bringing on more people isn't possible.
I'm unable and unwilling to shadowbox with what you think I'm actually experiencing.
That's fine; read it as me speaking to the whole thread, not challenging you directly. Technology drives economic productivity; increasing economic productivity generally implies worker displacement. That workers come out ahead in the long run (they have in the past; it's obviously not a guarantee) is besides my point. Software is automating software development away, the same way it automated a huge percentage of (say) law firm billable hours away. We'd better be ready to suck it up!
> That workers come out ahead in the long run (they have in the past...)
Would you mind naming a few instance of the workers coming out ahead?
Sure. Compare the quality of life of the Computers to that of any stably employed person today who owns a computer.
Got it, you're talking about workers getting ahead as a category -- no objections to that.
I doubt the displaced computers managed to find a better job on average. Probably even their kids were disadvantaged since the parents had fewer options to support their education.
So, who knows if this specific group of people and their descendants ever fully recovered let alone got ahead.
My argument is explicitly not premised on the claim that productivity improvements reliably work out to the benefit of existing workers. It's that practicing commercial software developers are agents of economic productivity, whether anticapitalist developers are happy about that or not, and have really no moral standing to complain about their jobs (or the joy in those jobs) being automated away. That's what increased economic productivity means: more getting done with less labor.
Yeah I see it as fair game
Automating jobs away is good for workers. Not bad. Don't you start repeating ignorant socialist nonsense. You are better than that.
> Automating jobs away is good for workers. Not bad.
Sure, if you completely disregard the past 200 years or so of history.
Can't relate at all. I've never had so much fun programming as I have now. All the boring and tedious parts are gone and I can finally focus on the code I love to write.
I don't know man, maybe prompt most of your work, eyeball it and verify it rigorously (which if you cannot do, you should absolutely never touch an LLM!), run a script to commit and push after 3 hours and then... work on whatever code makes you happy without using an LLM?
Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.
Still, prompting LLMs well requires eloquence and expressiveness that many programmers don't have. I have started deriving a lot of value from those LLMs I chose to interact with by specifying clear boundaries on what's the priority and what can wait for later and what should be completely ignored due to this or that objective (and a number of other parameters I am giving them). When you do that well, they are extremely useful.
I see this "prompting is an art" stuff a lot. I gave Claude a list of 10 <Route> objects and asked it to make an adjustment to all of them. It gave me 9 back. When I asked it to try again it gave me 10 but one didn't work. What's "prompt engineering" there, telling it to try again until it gets it right? I'd rather just do it right the first time.
We used to make fun of and look down on coders who mindlessly copy paste and mash the compile button until the code runs, for good reasons.
Did you skip the "rigorously verify the LLM code" part of my comment on purpose, just to show contempt?
Then don't use it? Nobody is making you.
I am also barely using LLMs at the moment. Even 10% of the time would be generous.
What I was saying is that I have tried different ways of interacting with LLMs and was happy to discover that the way I describe stuff to another senior dev actually works quite fine with an LLM. So I stuck to that.
Again, if an LLM is not up to your task, don't waste your time with it. I am not advocating for "forget everything you knew and just go ask Mr. AI". I am advocating for enabling and productivity-boosting. Some tasks I hate, for some I lack the deeper expertise, others are just verbose and require a ton of typing. If you can prompt the LLM well and vet the code yourself after (something many commenters here deliberately omit so they can happily tear down their straw man) then the LLM will be a net positive.
It's one more tool in the box. That's all there is to it really. No idea why people get so polarizing.
Prompt engineering is just trying that task on a variety of models and prompt variations until you can better understand the syntax needed to get the desired outcome, if the desired outcome can be gotten.
Honestly you’re trying to prove AI is ineffective by telling us it didn’t work with your ineffective protocol. That is not a strong argument.
What should I have done there? Tell it to make sure that it gives me all 10 objects I give it back? Tell it to not put brackets in the wrong place? This is a real question --- what would you have done?
How long ago was this? I'd be surprised to see Claude 3.7 Sonnet make a mistake of this nature.
Either way, when a model starts making dumb mistakes like that these days I start a fresh conversation (to blow away all of the bad tokens in the current one), either with that model or another one.
I often switch from Claude 3.7 Sonnet to o3 or o4-mini these days. I paste in the most recent "good" version of the thing we're working on and prompt from there.
Lol, "it didn't do it... and if it did it didn't mean it... and if it meant it it surely can't mean it now." This is unserious.
A full two thirds of the comment you replied to there were me saying "when these things start to make dumb mistakes here are the steps I take to fix the problem".
this is the rhetoric that you will see replied to effectively any negative experience with LLMs in programming.
You should have dropped the LLM, of course. They are not replacing us the programmers anytime soon. If they can be used as an enabler / booster, cool, if not, back to business as usual. You can only win here. You can't lose.
In no particular order:
* experiment with multiple models, preferably free high quality models like Gemini 2.5. Make sure you're using the right model, usually NOT one of the "mini" varieties even if its marketed for coding.
* experiment with different ways of delivering necessary context. I use repomix to compile a codebase to a text file and upload that file. I've found more integrated tooling like cursor, aider, or copilot, are less effective then dumping a text file into the prompt
* use multi-step workflows like the one described [1] to allow the llm to ask you questions to better understand the task
* similarly use a back-and-forth one-question-at-a-time conversation to have the llm draft the prompt for you
* for this prompt I would focus less on specifying 10 results and more about uploading all necessary modules (like with repomix) and then verifying all 10 were completed. Sometimes the act of over specifying results can corrupt the answer.
[1]: https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
I'm a pretty vocal AI-hater, partly because I use it day to day and am more familiar with its shortfalls - and I hate the naive zealotry so many pro-AI people bring to AI discussions. BUTTT we can also be a bit more scientific in our assessments before discarding LLMs - or else we become just like those naive pro-AI-everything zealots.
>most of us would delegate our work code to somebody else or something else if we could.
Laughably narrow-minded projection of your own perspective on others.
We all delegate. Did you knit your own clothes or is that too boring for you?
Enjoying to code/knit is fine but we can no longer expect to get paid well to do it.
Each activity we engage in has different use, value, and subjective enjoyment to different people. Some people love knitting! Personally, I do know how to sew small tears, which is more than most people in the US these days.
Just because I utilize the services of others for some things does not mean that it should be expected I want to utilize the service of others for all things.
This is a preposterous generalization and exactly why I said the OP premise is laughable.
Further, you’ve shifted OP’s point from subjective enjoyment of an activity to getting “paid well” - this is an irrelevant tangent to whether “most” people in general would delegate work if they could.
There is context, that you laughably skipped. You do you.
What context did I skip? It seems like the statement stands on its own.
Obviously my comment was shortened for brevity and it is kind of telling that you couldn't tell and rushed to tear down the straw man that you saw.
Answering your question:
- That there are annoying tasks none of us look forward to doing.
- That sometimes you have knowledge gaps and LLMs serve as a much better search engine.
- That you have a bad day but the task is due tomorrow. Happened to us all.
I am not "laughably projecting on others", no. I am enumerating human traits and work conditions that we all have or had.
OBVIOUSLY I did not mean that I would delegate all my work tomorrow if I could. I actually do love programming.
> most of us would delegate our work code to somebody else or something else if we could.
I saw your objections to other comments on the basis of them seemingly not having a disdainful attitude towards coding they do for work, specifically.
I absolutely do have tasks, coding included, that I don't want to do, and find no joy in. If I can have my manager assign the task to someone else, great! But using an LLM isn't that, so I'm still on the hook for ensuring all the most boring parts of that task (bugfixing, reworks, integration, tests, etc) get done.
My experience with LLMs is that they simply shift the division of time away from coding, and towards all the other bits.
And it can't possibly just be about prompting. How many hundreds of lines of prompting would you need to get an LLM to understand your coding conventions, security baselines, documentation reqs, logging, tests, allowed libraries, OSS license restrictions (i.e. disallowed libraries), etc? Or are you just refactoring for all that afterwards?
Maybe you work somewhere that doesn't require that level of rigor, but that doesn't strike me as a good thing to be entrenching in the industry by increasing coders' reliance on LLMs.
A super necessary context here is that I barely use LLM at all still. Maybe I should have said so but I figured that too much nuance would ruin a top-level comment and mostly casually commented on a tradeoff of using or not using LLMs.
Where I use LLMs:
1. Super boring and annoying tasks. Yes, my prompts for those include various coding style instructions, requests for small clarifying comments where the goal of the code is not obvious, tests. So, no OSS license restrictions. Libraries I specify most of the times I used LLMs (and only once did I ask it to suggest a library). Logging and telemetry I add myself. So long story short, I use the LLM to show me a draft of a solution and then mercilessly refactor it to match my practices and guidelines. I don't do 50 exchanges out of laziness, no.
2. Tasks where my expertise is lacking. I recently used an LLM to help me with making a `.clone()`-heavy Rust code to become nearly zero-copy for performance reasons -- it is a code on a hot path. As much as I love Rust and I am fairly good at it (realistically I'm IMO at 7.5 / 10), all the lifetimes and zero-copy semantics I still don't know yet. A long session with an LLM after, I emerged both better educated and with a faster code. IMO a win-win.
That's interesting, especially wrt the Rust example. I actually like LLMs as reference docs, I just don't trust their code as far as I can throw it.
Thanks for the follow-up!
> Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.
Hard disagree, I get to hyperfocus on making magical things that surprise and delight me every day.
Nice. I've got a whole lot of magical things that I need built for my day job. Want to connect so I can hand the work over to you? I'll still collect the paychecks, but you can have the joy. :)
I'm disappointed that several of you so easily skipped over the "work" word. It is doing a lot of work in that sentence.
> Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.
I don’t think this is the case, if anything the opposite is true. Most of us would like to do the work code but have realized, at some career point, that you’re paid more to abstract yourself away from that and get others to do it either in technical leadership or management.
> I don’t think this is the case, if anything the opposite is true
I'll be a radical and say that I think it depends and is very subjective.
Author above you seems to enjoy working on code by itself. You seem to have a different motivation. My motivation is solving problems I encounter, code just happen to be one way out of many possible ones. The author of the submission article seems to love the craft of programming in itself, maybe the problem itself doesn't even matter. Some people program just for the money, and so on.
Well, does not help that a lot of work tasks are meaningless drudgery that we collectively should have trivialized and 100% automated at least 20 years. That was kind of the core my point: a lot of work tasks are just plain BS.
I wouldn't, I got into software exactly because I enjoy solving problems and writing code. Verifying shitty, mindless, computer generated code is not something I would consider doing for all the money in the world.
1. I work on enjoyable problems after I let the LLM do some of the tasks I have to do for money. The LLM frees me bandwidth for the stuff I truly love. I adore solving problems with code and that's not going to change ever.
2. Some of the modern LLMs generate very impressive code. Variables caching values that are reused several times, utility functions, even closure helpers scoped to a single function. I agree that when the LLM code's quality falls bellow a certain threshold then it's better in every way to just write it yourself instead.
> most of us would delegate our work code to somebody else or something else if we could
Not me. I code because I love to code, and I get paid to do what I love. If that's not you…find a different profession?
Needlessly polarizing. I love coding since 12 years old (so more than 30 years at this point) but most work tasks I'm given are fairly boring and uninteresting and don't move almost any science or knowledge forward.
Delegating part of that to an LLM so I can code the stuff I love is a big win for my motivation and is making me doing the work tasks with a bit more desire and pleasure.
Please don't forget that most of us out there can't code for money anything that their heart wants. If you can, I'd be happy for you (and envious) but please understand that's also a fairly privileged life you'd be having in that case.
The act of coding preserves your skills for that all-important verification step. No coding and the whole system falls apart.
Exactly, how are you supposed to verify anything when you don't have any skills left beyond prompting.
You don't. That's why you don't use an LLM most of the time. I was talking about cases where either the tasks were too boring or required an expertise that I didn't have at the time.
Thought it was obvious.
> or required an expertise that I didn't have at the time
How do you verify code that you don't have the expertise to write on your own?
Good question. I run it by the docs that intimidated me before. Because I did not ask the LLM for the code only; I asked it to fully explain what did it change and why.
Absolutely. That's why I don't give the LLM the reins for long, nor do I tell it to do the whole thing. I want to keep my mind sharp and my abilities honed.
> Still, prompting LLMs well requires eloquence and expressiveness that many programmers don't have
It requires magical incantations that may or may not work and where a missing comma in a prompt can break the output just as badly as the US waking up and draining compute resources.
Has nothing to do with eloquence
> "verify it rigorously (which if you cannot do, you should absolutely never touch an LLM!)"
100% this.
I like writing code more than reading it, personally.
Yeah, I think that's pretty common. It took me 15+ years of my own career before I got over my aversion to spending significant amounts of time reading through code that I didn't write myself.
We all do. But more often than not we have to learn to do surgical incisions in order to do our task for the day. It's what truly distinguishes a professional.
Totally. And yet rigorous proof is very difficult. Having done some mathematics involving nontrivial proofs, I respect even more how difficult rigor is.
Ah, I absolutely don't verify code in the mathematical sense of the word. More like utilize strong static typing (or hints / linters in weaker typed languages) and write a lot of tests.
Nothing is truly 100% safe or free of bugs. What I meant with my comment up-thread was that I have enough experience to have a fairly quick and critical eye of code, and that has saved my skin many times.
You have an automation bias. "Surely this thing knows more than me it must be right." and there is no reason to believe that, but you will.
How did you get there from me agreeing 100% with someone who said that you should be ready to verify everything an LLM does for you and if you're not willing to do that you shouldn't use them at all?
Do you ever read my comments, or do you just imagine what I might have said and reply to that?
> work on whatever code makes you happy without using an LLM?
This isn't how it works, psychologically. The whole time I'm manual coding, I'm wondering if it'd be "easier" to start prompting. I keep thinking about a passage from The Road To Wigan Pier where Orwell addresses this effect as it related to the industrial revolution:
>Mechanize the world as fully as it might be mechanized, and whichever way you turn there will be some machine cutting you off from the chance of working—that is, of living.
>At a first glance this might not seem to matter. Why should you not get on with your ‘creative work’ and disregard the machines that would do it for you? But it is not so simple as it sounds. Here am I, working eight hours a day in an insurance office; in my spare time I want to do something ‘creative’, so I choose to do a bit of carpentering—to make myself a table, for instance. Notice that from the very start there is a touch of artificiality about the whole business, for the factories can turn me out a far better table than I can make for myself. But even when I get to work on my table, it is not possible for me to feel towards it as the cabinet-maker of a hundred years ago felt towards his table, still less as Robinson Crusoe felt towards his. For before I start, most of the work has already been done for me by machinery. The tools I use demand the minimum of skill. I can get, for instance, planes which will cut out any moulding; the cabinet-maker of a hundred years ago would have had to do the work with chisel and gouge, which demanded real skill of eye and hand. The boards I buy are ready planed and the legs are ready turned by the lathe. I can even go to the wood-shop and buy all the parts of the table ready-made and only needing to be fitted together; my work being reduced to driving in a few pegs and using a piece of sandpaper. And if this is so at present, in the mechanized future it will be enormously more so. With the tools and materials available then, there will be no possibility of mistake, hence no room for skill. Making a table will be easier and duller than peeling a potato. In such circumstances it is nonsense to talk of ‘creative work’. In any case the arts of the hand (which have got to be transmitted by apprenticeship) would long since have disappeared. Some of them have disappeared already, under the competition of the machine. Look round any country churchyard and see whether you can find a decently-cut tombstone later than 1820. The art, or rather the craft, of stonework has died out so completely that it would take centuries to revive it.
>But it may be said, why not retain the machine and retain ‘creative work’? Why not cultivate anachronisms as a spare-time hobby? Many people have played with this idea; it seems to solve with such beautiful ease the problems set by the machine. The citizen of Utopia, we are told, coming home from his daily two hours of turning a handle in the tomato-canning factory, will deliberately revert to a more primitive way of life and solace his creative instincts with a bit of fretwork, pottery-glazing, or handloom-weaving. And why is this picture an absurdity—as it is, of course? Because of a principle that is not always recognized, though always acted upon: that so long as the machine is there, one is under an obligation to use it. No one draws water from the well when he can turn on the tap. One sees a good illustration of this in the matter of travel. Everyone who has travelled by primitive methods in an undeveloped country knows that the difference between that kind of travel and modern travel in trains, cars, etc., is the difference between life and death. The nomad who walks or rides, with his baggage stowed on a camel or an ox-cart, may suffer every kind of discomfort, but at least he is living while he is travelling; whereas for the passenger in an express train or a luxury liner his journey is an interregnum, a kind of temporary death. And yet so long as the railways exist, one has got to travel by train—or by car or aeroplane. Here am I, forty miles from London. When I want to go up to London why do I not pack my luggage on to a mule and set out on foot, making a two days of it? Because, with the Green Line buses whizzing past me every ten minutes, such a journey would be intolerably irksome. In order that one may enjoy primitive methods of travel, it is necessary that no other method should be available. No human being ever wants to do anything in a more cumbrous way than is necessary. Hence the absurdity of that picture of Utopians saving their souls with fretwork. In a world where everything could be done by machinery, everything would be done by machinery. Deliberately to revert to primitive methods to use archaic took, to put silly little difficulties in your own way, would be a piece of dilettantism, of pretty-pretty arty and craftiness. It would be like solemnly sitting down to eat your dinner with stone implements. Revert to handwork in a machine age, and you are back in Ye Olde Tea Shoppe or the Tudor villa with the sham beams tacked to the wall.
>The tendency of mechanical progress, then, is to frustrate the human need for effort and creation. It makes unnecessary and even impossible the activities of the eye and the hand. The apostle of ‘progress’ will sometimes declare that this does not matter, but you can usually drive him into a comer by pointing out the horrible lengths to which the process can be carried.
sorry it's so long
This article resonates with me like no other has in years. I very recently retired after 40 years writing software because my role had evolved into a production-driven limbo. For the past decade I have scavenged and copied other peoples' code into bland cookie cutter utilities that fed, trained, ran, and summarized data mining ops. It has required not one whit of creative expression or 'flow', making my life's work as dis-engaging as that of... well... the most bland job you can imagine.
AI had nothing to do with my own loss of engagement, though certainly it won't cure what ailed me. In fact, AI promises to do to all of software development what the mechanized data mining process did to my sense of creative self-expression. It will squeeze all the fun out of it, reducing the joy of coding (and its design) to plug-and-chug, rinse, repeat.
IMHO the threat of AI to computer programming is not the loss of jobs. It's the loss of personal passionate engagement in the craft.
I’ve been struggling with a very similar feeling. I too am a manager now. Back in the day there was something very fulfilling about fully understanding and comprehending your solution. I find now with AI tools I don’t need to understand a lot. I find the job much less fulfilling.
The funny thing is I agree with other comments, it is just kind of like a really good stack overflow. It can’t automate the whole job, not even close, and yet I find the tasks that it cannot automate are so much more boring (the ones I end up doing).
I envy the people who say that AI tools free them up to focus on what they care about. I haven’t been able to achieve this building with ai, if anything it feels like my competence has decreased due to the tools. I’m fairly certain I know how to use the tools well, I just think that I don’t enjoy how the job has evolved.
It's 9am in the morning. I login to my workstation and muddle my way through the huge enterprise code base which doesn't fit into any model context window for the AI tool to be useful (and even if it did, we can't use any random model due to compliance and proprietary and whatnot).
I have thousands deadlines which are suddenly coming due and a bunch of code which is broken because some poor soul under the same pressure put something that "works" in. And it worked, until it didn't, and now it's my turn in the barrel.
Is this the joy?
I'm not complaining, I'm doing it for the good money.
When we outsource the parts of programming that used to demand our complete focus and creativity, do we also outsource the opportunity for satisfaction? Can we find the same fulfillment in prompt engineering that we once found in problem-solving through code?
Most of AI-generated programming content I use are comments/explanations for legacy code, closely followed by tailored "getting started" scripts and iterations on visualisation tasks (for shitty school assignments that want my pyplots to look nice). The rest requires an understanding, which AI can help you achieve faster (it's read many a book related to the topic, so it can recall information a lot like an experienced colleague may), but it can't confer capital K Knowledge or understanding upon you. Some of the tasks it performs are grueling, take a lot of time to do manually, and provide little mental stimulation. Some may be described as lobotomizing and (in my opinion) may mentally damage you in the "Jack Torrance typewriter" kinda way.
It makes me able to work on the fun parts of my job which possess the qualities the article applauds.
I always thought about the problem of AI taking jobs, that even if there are new jobs created to replace the older ones, it will come at a cost of decrease in satisfaction of overall populace.
The more people in general get disconnect from nature/physical world/reality. via layers of abstraction the more discontent they will become. These layers can be: 1) Automatics in agriculture. 2) Industries. 3) Electronics 4) Software 5) and now AI
Each higher layer depends on lower ones for its functioning without the need to worry about specifics and provides a framework for higher abstraction to work on.
The more we move up in hierarchy the more disconnected we become from the physical world.
To support this I observed that villagers in general are more jolly and content than city dwellers. In metropolis specially I saw that people are more rude, anxious and always agitated, while villagers are welcoming and peaceful.
Another good example is that of an artist finding it boring to guide AI even though he loves making paintings himself/herself.
I've been singin' this song for years. We should return to Small Data. Hand picked, locally sourced, data. Data I can buy at a mom and pop shop. Data I can smell, data I can feel, data I can yearn for.
Gone are those days.
I'm guessing you're referencing KRAZAM? https://youtu.be/eDr6_cMtfdA
https://m.youtube.com/watch?v=eDr6_cMtfdA
When I coding, most of time was used to search docs over internet. My first language is not english, search over hundrud of pages is quiet slow.
AI help me a lot, you don't need search, just ask AI, and it provide the answer directly. After using AI, I have more time used on coding, more fun.
I am mostly pretty underwhelmed with LLMs' code, but this is a use-case that makes perfect sense to me, and seems like a net-positive: using them as a reference manual/ translator/ training aid.
I just wish I saw more people doing this, rather than asking them to 'draw 80% of the owl'.
There is craft in business, in product, and in engineering.
A lot of these discussions focus on craft in engineering and there's lots of merit there regarding AI tools and how they change that process, but I've found that folks who enjoy both the product side of things and the engineering side of things are thriving while those who were very engineering focused understandably feel apprehensive.
I will say, in my day job, which is often at startups, I have to focus more on the business / product side just given the phase of the company. So, I get joy from engineering craft in side projects or other things I work on in my own time to scratch the itch.
> After all, if we lose the joy in our craft, what exactly are we optimizing for?
For being one of the few lucky ones that gets to stay around taking care of the software factory robots, or designing them, while everyone else that used to work at the factory is now queueing somewhere else.
To me THIS is the most stressful part of the whole thing.
I like programming but I have other hobbies I find fulfilling, and nothing stops me from programming with a pen and paper.
The bad vibes are not caused by lack of programming, they're caused by headsman sharpening his axe behind me.
A few lucky programmers will be elevated to God status and we're all fighting for those spots now.
For me the most surprising part is the phase of wonder, from those that apparently never read anything in the history of industrial revolution, and think everyone will still have a place when we achieve Star Trek replicator level.
Not everyone gets a seat at the starship.
The author is already an experienced programmer. Let me toss in an anecdote about the next generation of programmers. Vibe coding: also called playing pinball with the AI, hoping something useful comes out.
I taught a lecture in my first-semester programming course yesterday. This is in a program for older students, mostly working while going back to school. Each time, a few students are selected to present their code for an exercise that I pick randomly from those they were assigned.
This guy had fancy slides showing his code, but he was basically just reading the code off the page. So I ask him: “hey, that method you call, what exactly does it do?”.
Um…
So I ask "Ok, the result from that method is assigned to a variable. What kind of variable is it?" Note that this is Java, the data type is explicitly declared, so the answer is sitting there on his slide.
Um…
So I tear into him. You got this from ChatGPT. That’s fine, if you need the help, but you need to understand what you get. Otherwise you’ll never get a job in IT.
His answer: “I already have a job in IT.”
Fsck. There is your vibe coder. You really do not want them working on anything that you care about.
This is one of the biggest dangers imo. While I agree with the OP about the deflation of joy in experienced programmers, the related but more consequential effect seems to be dissuading people from learning. A generational threat to collective competence and a disservice to students and teachers everywhere
Does your course not have exams or in-lab assignments? Should sort itself out. Honestly, I'm all for homework fading away as professors can't figure out how to prevent people from using AI. It used to be the case that certain kids could get away with not doing much because they were popular enough to get people to let them copy their assignments (at least for certain subjects). Eventually the system will realize they can't detect AI and everything has to be in-person.
Sure, this guy is likely to fail the course. The point is: he is already working in the field. I don't know his exact job, but if it involves programming, or even scripting, he is faking his way with AI, not understanding what he's doing. That is frightening.
> I don't know his exact job, but if it involves programming, or even scripting, he is faking his way with AI, not understanding what he's doing. That is frightening.
That could be considered malpractice. I know our profession currently doesn't have professional standards, but it's just a side effect of it being very new and not yet solidified; it won't be long until some duty of care becomes required, and we're already starting to see some movement in that direction, with things like the EU CRA.
So long as your experience and skill allows you to produce work of higher quality than average for your industry, then you will always have a job which is to review that average quality work, and surgically correct it when it is wrong.
This has always been true in every craft, and it remains true for programmers in a post LLL world.
Most training data is open source code written by novice to average programmers publishing their first attempts at things and thus LLMS are heavily biased to replicate the naive, slow, insecure code largely uninformed by experience.
Honestly to most programmers early in their career right now, I would suggest spending more time reviewing code, and bugfixes, than writing code. Review is the skillset the industry needs most now.
But you will need to be above average as a software reviewer to be employable. Go out into FOSSland and find a bunch of CVEs, or contribute perf/stability/compat fixes, proving you review and improve things better than existing automated tools.
Trust me, there are bugs -everywhere- if you know how to look for them and proving you can find them is the resume you need now.
The days of anyone that can rub two HTML tags together having a high paying job are over.
> LLMS are heavily biased to replicate the naive, slow, insecure code largely uninformed by experience
The one time i pasted LLM code without reviewing it it belonged on accidentally quadratic.
It was obvious at first read, but probably not for a beginner. The accidental complexity was hidden behind API calls that weren't wrong, just grossly inefficient.
Problem might be, if you lose the "joy" and the "flow" you'll stop caring about things like that. And software is bloated enough already.
I love that quote he led with.
In my case, I couldn't agree more, with the premise of the article, but my life today, is centered around writing software the very best that I can; regardless of value or price.
It's not very effective, if I were to be trying to make a profit.
It's really hard to argue for something, if the something doesn't result in value, as perceived by others.
For me, the value is the process. I often walk away from my work, once I have it up and shipping. I do like to take my work all the way through shipping, support, and maintenance, but find that my eye is always drawn towards new shores[0].
[0] https://littlegreenviper.com/miscellany/thats-not-what-ships...Honestly, most of the "real engineer" rhetoric is exhausting. Here's the thing: the people most obsessed with software craftsmanship, pattern orthodoxy, and layered complexity often create some of the most brittle, hostile, constantly mutating systems imaginable. You may be able to build abstractions, but if you're shipping stuff that users have to re-learn every quarter because someone needed to justify a promotion via another UI revamp or tech stack rewrite, you're not designing well. You're just changing loudly.
Also, stop gatekeeping AI tooling like it’s cheating. We’re not in a craft guild. The software landscape is full of shovelware and half-baked “best practices” that change more often than a JavaScript framework’s logo. I'm not here to honor the tradition of suffering through YAML hell or memorizing the 400 ways to configure a build pipeline. I’m here to make something work well, fast, and that includes leveraging AI like the power tool it is.
So yeah, you can keep polishing the turd pile of over-engineered “real” systems. The rest of us will be using AI to build, test, and ship faster than your weekly stand-up even finishes.
Flow Management
Flow comes when challenge meets skill
Too much skill and too little challenge creates boredom;
too little skill and too much challenge creates anxiety
AI has reduced the challenge needed for achieving your goal, creating boredom
Remedy: find greater challenges?
I will start by saying I don't have much experience with the latest AI coding tools.
From what I've seen using them would lead to more boredom. I like solving problems. I don't like doing code reviews. I wouldn't trust any AI generated code at this stage without reviewing it. If I could swap that around so I write code and AI gives me a reasonable code review and catches my mistakes I'd be much more interested.
I would argue that the vast majority of challenges I have had in my (very long) tech career were not technical challenges anyway, rather they were "people" problems (e.g., extracting the actual requirements and maintaining scope stability).
Would you be happier and feel more flow if you were typing in assembly? What about hand-punching cards? To me this reads more as nostalgia than a genuine concern. Tools are always increasing in abstraction, but there’s no reason you can’t achieve flow with new tools. Learning to prompt is the new learning to type.
I think.. based on recent events.. that some of the corporate inefficiencies are very poorly captured. Last year we had an insane project that was thrown at us before end of the year, because, basically, company had a tiff with the vendor and would rather have us spend our time in meetings trying to do what they are doing rather than pay vendor for that thing. From simple money spent perspective, one would think company's simple amoral compass would be a boon.
AI coding is similar. We just had a minor issue with ai generated code that was clearly not vetted as closely as it should have been making output it generated over a couple of months not as accurate as it should be. Obviously, it had to be corrected, then vetted and so on, because there is always time to correct things...
edit: What I am getting at is the old-fashioned, penny smart, but pound foolish.
My experience has been almost the opposite.
Typing isn't the fun part of it for me. It's a necessary evil to realize a solution.
The fun part of being an engineer for me is figuring out how it all should work and fit together. Once that's done - I already basically have all of the code for the solution in my head - I've just got to get it out through my fingers and slog through all the little ways it isn't quite right, doesn't satisfy x or y best practice, needs to be reshaped to accommodate some legacy thing it has to integrate that is utterly uninteresting to me, etc.
In the old model, I'd enjoy the first few hours or days of working on something as I was designing it in my mind, figuring out how it was all going to work. Then would come the boring part. Toiling for days or weeks to actually get all the code just so and closing that long-tail gap from 90% done (and all interesting problems solved) to 100% done (and all frustrating minutia resolved).
AI has dramatically reduced the amount of time the unsatisfying latter part of a given effort lasts for me. As someone with high-functioning ADD, I'm able to stay in the "stimulation zone" of _thinking_ about the hard / enjoyable part of the problem and let AI do (50-70%, depending on domain / accuracy) of the "typing toil".
Really good prompts that specify _exactly_ what I want (in technical terms) are important and I still have to re-shape, clean up, correct things - but it's vastly different than it was before AI.
I'm seeing on the horizon an ability to materialize solutions as quickly as I can think / articulate - and that to me is very exciting.
I will say that I am ruthlessly pragmatic in my approach to development, focusing on the most direct solution to meet the need. For those that obsesses over beautiful, elegant code - personalizing their work as a reflection of their soul / identity or whatever, I can see how AI would suck all the joy from the process. Engineering vs. art, basically. AI art sucks and I expect that's as true for code as it is for anything else.
The things I'm usually tabbing through in cursor are not the things that make me feel a lot of enjoyment in your work. The things that are most enjoyable are usually the system level design aspects, the refactorings to make things work better. These you can brainstorm with AI, but cannot delegate to AI today.
The rest is glorified boilerplate that I find usually saps me of my energy, not gives me energy. I'm a fan of anything that can help me skip over that and get to the more enjoyable work.
I asked chatgpt mini something about godot, and often it gives erroneous answers.
So it causes developers to regularly fix what chatgpt is wrong about.
Not great.
I found myself recently making decent superficial progress only to introduce a bug and had a system crash (unusual bc it’s python) bc I didn’t really understand how the package worked (bc I bypassed the docs for the AI examples). It did end up working out ok - I then went into the weeds and realised the AI has given me two examples that worked in isolation but not together - inconsistent API calls essentially. I do like understanding what I’m doing as much or more than getting it done, bc it always comes back to you, sooner or later.
The post focuses on flow, but depending on what you mean by it, it isn't necessarily a good thing. Trying to solve something almost too difficult usually gets you out of flow. You still need concentration, though.
My main worry about AI is that people just keep using the garbage that exists instead of trying to produce something better, because AI takes away much of the pain of interacting with garbage. But most people are already perfectly fine using garbage, so probably not much will change here.
As a scientist, I actually greatly enjoy the AI assisted coding because it can help with the boring/tedious side of coding. I.e. I occasionally have some new ideas/algorithms to try, and previously I did not have enough time to explore them out, because there was just too much boring code to be written. Now this part is essentially solved, and I can more easily focus on key algorithms/new ideas.
Funny that I found this article going to hacker news as a pause in my work : I had to chose between using Aider or my brain to code a small algorithmic task, sorting items of a list based on dependences between items written in a YAML file.
Using Aider would probably solve the task in 5 minutes. Coding it in 30 minutes. The former choice would result in more time for other tasks or reading HN or having a hot beverage or walking in the sun. The second would challenge my rusting algorithmic skills and give me a better understanding of what I'm doing for the medium term.
Hard choice. In any case, I have a good salary, even with the latter option I can decide to spend good times.
I've tried getting different AIs to say something meaningful about code, never got anything of value back so far. They can't even manage tab-completion well enough to be worth the validation effort for me.
Yeah I wonder how do the code look after such professional AI development. I tried ChatGPT 1o to ask it about simple C function - what errors are there. It answered only after I directly asked about the aspects I was expecting it to tell about. It means that if I didn't know that the LLM wouldn't tell me...
I had a lot of joy making an experimental DSL with a web server runtime using primarily LLM tools.
Then I shared it on HN and was subject to literal harassment.
Typing isn't what makes programming fun.
AI coding preserves flow more than legacy coding. You never have to go read documentation for an hour. You can continuously code.
“ Fast forward to today, and that joy of coding is decreasing rapidly. Well, I’m a manager these days, so there’s that…”
This sounds a more likely reason for losing your joy if your passion is coding.
Have you encounter anything regarding tech debt when using AI?
Don't see any mention regarding this in the post, which is the common objection people have regarding vibe coding.
I’m the opposite. Tabbing through boilerplate increases my flow.
> Fast forward to today, and that joy of coding is decreasing rapidly. Well, I’m a manager these days, so there’s that… But even when I do get technical, I usually just open Cursor and prompt my way out of 90% of it. It’s way more productive, but more passive as well.
Dude's an engineering manager who codes maybe 5% of the time and his joy is decreasing. AI is not the problem, it's being an engineering manager.
I think a lot of this discussion is moot - it all devolves into the same arguments rehashed between people who like using AI and people who do not.
What we really need are more studies on the productivity and skill outcomes of using AI tools. Microsoft did one, with results that were very negative towards AI tools [1]. I would like to see more (and much larger cohort) studies along this line, whether they validate Microsoft's conclusions or oppose them.
Personally I do not find AI coding tools to be useful at all - but I have not put extensive time into developing a "skillset" to use them optimally. Mainly because I believe, similar to what the study by MS found, that they are detrimental to my critical reasoning skills. If this turns out to be wrong, I would not mind evaluating changing course on that decision - but we need more data.
1. https://www.microsoft.com/en-us/research/wp-content/uploads/...
i dont know where you are working, but where I work i cant prompt 90% of my job away using cursor. in fact, I find all of these tools to be more and more useless and our codebase is growing and becoming more complex
based on the current state of AI and the progress im witnessing on a month-by-month basis - my current prediction is there is zero chance AI agents are going to be coding and replacing me in the next few years. if i could short the startups claiming this, I would.
Don't get distracted by claims that AI agents "replace programmers". Those are pure hype.
I'm willing to bet that in a few years most of the developers you know will be using LLMs on a daily basis, and will be more productive because of it (having learned how to use it).
this is already the case.
I have the same experience. It‘s basically a better StackOverflow, but just like with SO you have to be very careful about the replies, and also just like SO its utility diminishes as you get more proficient.
As an example, just today I was trying to debug some weird WebSocket behaviour. None of the AI tools could help, not Cursor, not plain old ChatGPT with lots of prompting and careful phrasing of the problem. In fact every LLM I tried (Claude 3.7, GPT o4-mini-high, GPT 4.5) introduced errors into my debugging code.
I’m not saying it will stay this way, just that it’s been my experience.
I still love these tools though. It’s just that I really don’t trust the output, but as inspiration they are phenomenal. Most of the time I just use vanilla ChatGPT though; never had that much luck with Cursor.
No one was forcing you to use SO, in fact we made fun of people who did copy-paste/compile-coding.
Yeah, they're currently horrible at debugging -- there seems to be blind spots they just can't get past so end up running in circles.
A couple days ago I was looking for something to do so gave Claude a paper ("A parsing machine for PEGs") to ask it some questions and instead of answering me it spit out an almost complete implementation. Intrigued, I threw a couple more papers at it ("A Simple Graph-Based Intermediate Representation" && "A Text Pattern-Matching Tool based on Parsing Expression Grammars") where it fleshed out the implementation and, well... color me impressed.
Now, the struggle begins as the thing has to be debugged. With the help of both Claude and Deepseek we got it compiling and passing 2 out of 3 tests which is where they both got stuck. Round and round we go until I, the human who's supposed to be doing no work, figured out that Claude hard coded some values (instead of coding a general solution for all input) which they both missed. In applying ever more and more complicated solutions (to a well solved problem in compiler design) Claude finally broke all debugging output and I don't understand the algorithms enough to go in and debug it myself.
Of course I didn't use any sort of source code management so I could revert to a previous version before it was broken beyond all fixing...
Honestly, I don't even consider this a failure. I learned a lot more on what they are capable of and now know that you have to give them problems in smaller sections where they don't have to figure out the complexities of how a few different algorithms interact with each other. With this new knowledge in hand I started on what I originally intended to do before I got distracted with Claude's code solution to a simple question.
--edit--
Oh, the irony...
After typing this out and making an espresso I figured out the problem Claude and Deepseek couldn't see. So much for the "superior" intelligence.
One of the ways these tools are most useful for me is in extremely complex codebases.
This has become especially true for me in the past four months. The new long context reasoning models are shockingly good at digging through larger volumes of gnarly code. o3, o4-mini and Claude 3.7 Sonnet "thinking" all have 200,000 token context limits, and Gemini 2.5 Pro and Flash can do 1,000,000. As "reasoning" models they are much better suited to following the chain of a program to figure out the source of an obscure bug.
Makes me wonder how many of the people who continue to argue that LLMs can't help with large existing codebases are missing that you need to selectively copy the right chunks of that code into the model to get good results.
But 1 million tokens is like 50k lines of code or something. That's only medium sized. How does that help with large complex codebases?
What tools are you guys using? Are there none that can interactively probe the project in a way that a human would, e.g. use code intelligence to go-to-definition, find all references and so on?
This to me is like every complaint I read when people generate code and the LLM spits out an error, or something stupid. It's a tool. You still have to understand software construction, and how to hold the tool.
Our Rust fly-proxy tree is about 80k (cloc) lines of code; our Go flyd tree (a Go monorepo) is 300k. Generally, I'll prompt an LLM to deal with them in stages; a first pass, with some hints, on a general question like "find the code that does XYZ"; I'll review and read the code itself, then feed that back to the LLM with questions like "summarize all the functionality of this package and how it relates to other packages" or "trace the flow of an HTTP request through all the layers of this proxy".
Generally, I'll take the results of those queries and have them saved in .txt files that I can reference in future prompts.
I think sometimes developers are demanding something close to AGI from their tooling, something that would do exactly what they would do (only, in the span of about 15 seconds). I don't believe in AGI, and so I don't expect it from my tools; I just want them to do a better job of fielding arbitrary questions (or generating arbitrary code) than grep or eglot could.
Yeah, 50,000 lines sounds about right for 1m tokens.
If your codebase is larger than that there are a few tricks.
The first is to be selective about what you feed into the LLM: if you know the work you are doing is in a particular area of the codebase, just paste that bit in. The LLM can make reasonable guesses about things the code references that it can't see.
An increasingly effective trick is to arm a tool-using LLM with a tool like ripgrep (effectively the "interactively probe the project in a way that a human would" idea you suggested). Claude Code and OpenAI Codex both use this trick. The smarter models are really good at deciding what to search for and evaluating the results.
I've built tools that can run against Python code and extract just the class, function and method signatures and their docstrings - omitting the actual code. If you code is well designed and has reasonable documentation that could be enough for the LLM to understand it.
https://github.com/simonw/symbex is my CLI tool for that
https://simonwillison.net/2025/Apr/23/llm-fragment-symbex/ is a tool I released this morning that turns Symbex into a plugin for my LLM tool.
I use my https://llm.datasette.io/ tool a lot, especially with its new fragments feature: https://simonwillison.net/2025/Apr/7/long-context-llm/
This means I can feed in the exact code that the model needs in order to solve a problem. Here's a recent example:
From https://simonwillison.net/2025/Apr/20/llm-fragments-github/ - I'm populating the context with the exact examples needed to solve the problem.in the meantime im having lots of fun coding and using AI, reinventing every wheel i can. 0 stress cos i don't do it for money :). I think a lot of people are having a tantrum because programing is not sexy anymore, its getting easier, the bar is lower now , the quality is awful and nobody cares. its like any other boring soul crushing job.
also if you want to see the real cost (at least part of it) of AI coding or the whole fucked up IT industry, go to any mining town in the global south.
>"...the one thing that currently worries me most about using AI for software development: lack of joy."
I struggled with this at first too. But it just becomes another kind of joy. Think of it like jogging versus riding a motorcycle. Jogging is fun, people enjoy it, and they always will. But flying down a canyon road at 90MPH and racing through twists and turns is... way more fun. Once you've learned how to do it. But there's a gap there in which it stops being fun until you do.
That’s an interesting analogy but I do disagree with it.
I would say that programming without an AI is like riding a motorcycle. You’re in complete control and it’s down to your skill to get you we’re your going.
While using AI is like taking a train. You got to plan the route but you’re just along for the ride.
Which I think lines up to the article. If you want to get somewhere easily and fast, take a train. But that does take away the joy of the journey.
One of the things people often overlook don't talk about in this arguments is the manager's point of view and how it's contributing to the shakeups in this industry.
As a developer I'm bullish on coding agents and GenAI tools, because they can save you time and can augment your abilities. I've experienced it, and I've seen it enough already. I love them, and want to see them continue to be used.
I'm bearish on the idea that "vibe coding" can produce much of value, and people without any engineering background becoming wildly productive at building great software. I know I'm not alone. If you're a good problem solver who doesn't know how to code, this is your gateway. And you better learn what's happening with the code while you can to avoid creating a huge mess later on.
Developers argue about the quality of "vibe coded" stuff. There are good arguments on both sides. At some point I think we all agree that AI will be able generate high quality software faster than a human, someday. But today is not that day. Many will try to convince you that it is.
Within a few years we'll see massive problems from AI generated code, and it's for one simple reason:
Managers and other Bureaucrats do not care about the quality of the software.
Read it again if you have to. It's an uncomfortable idea, but it's true. They don't care about your flow. They don't care about how much you love to build quality things. They don't care if software is good or bad they care about closing tickets and creating features. Most of them don't care, and have never cared about the "craft".
If you're a master mason crafting amazing brickwork, you're exactly the same as some amateur grabbing some bricks from home depot and slapping a wall together. A wall is a wall. That's how the majority of managers view software development today. By the time that shoddy wall crumbles they'll be at another company anyway so it's someone else's problem.
When I talk about the software industry collapsing now, and in a few years we're mired with garbage software everywhere, this is why. These people in "leadership" are salivating at the idea of finally getting something for nothing. Paying a few interns to "vibe code" piles of software while they high five each other and laugh.
It will crash. The bubble will pop.
Developers: Keep your skills sharp and weather out the storm. In a few years you'll be in high demand once again. When those walls crumble, they will need people who what they're doing to repair it. Ask for fair compensation to do so.
Even if I'm wrong about all of this I'm keeping my skills sharp. You should too.
This isn't meant to be anti-management, but it's based on what I've seen. Thanks for coming to my TED talk.
* And to the original point, In my experience the tools interrupt the "flow" but don't necessarily take the joy out of it. I cannot do suggestion/autocomplete because it breaks my flow. I love having a chat window with AI nearby when I get stuck or want to generate some boilerplate.
I think you are right with the building analogy. Most stuff built in the last 25 years is crap quality! But importantly looks nice. Needs to look nice at first.
> If you're a master mason crafting amazing brickwork, you're exactly the same as some amateur grabbing some bricks from home depot and slapping a wall together.
IDK, there's still a place in society for master masons to work on 100+ year old buildings built by other master masons.
Same with the robots. They can implement solutions but I'm not sure I've heard of any inventing an algorithmic solution to a problem.
Earlier this year, a hackernews started quizzing me about the size and scope of the projects I worked on professionally, with the implication that I couldn't really be working on anything large or complex -- that I couldn't really be doing serious development, without using a full-fat IDE like IntelliJ. I wasn't going to dox myself or my professional work just so he could reach a conclusion he's already arrived at. The point is, to this person, beyond a certain complexity threshold -- simple command-line tools, say -- an IDE was a must, otherwise you were just leaving productivity on the table.
https://news.ycombinator.com/item?id=42511441
People are going to be making the same judgements about AI-assisted coding in the near future. Sure, you could code everything yourself for your own personal enrichment, or simply because it's fun. But that will be a pursuit for your own time. In the realm of business, it's a different story: you are either proompting, or you're effectively stealing money from your employer because you're making suboptimal use of the tools available. AI gets you to something working in production so much faster that you'd be remiss not to use it. After all, as Milt and Tim Bryce have shown, the hard work in business software is in requirements analysis and design; programming is just the last translation step.
The old joy may be gone. But the new joy is there, if you're receptive to it
And which joy is that? Short sighted profits?
Injecting malware via flaws in the shitty programs, maybe
So if I'm understanding this, there are two central arguments being made here.
1. AI Coding leads to a lack of flow.
2. A lack of flow leads to a lack of joy.
Personally, I can't find myself agreeing with the first argument. Flow happens for me when I use AI. It wouldn't surprise me if this differed developer to developer. Or maybe it is the size of requests I'm making, as mine tend to be on the smaller size where I already have an idea of what I want to write but think the AI can spit it out faster. I also don't really view myself as prompt engineering; instead it feels more like a natural back and forth with the AI to refine the output I'm looking for. There are times it gets stubborn and resistant to change but that is generally a sign that I might want to reconsider using AI for that particular task.
One trend I've been finding interesting over the past year is that a lot of engineers I know who moved into engineering management are writing code again - because LLMs mean they can get something productive done in a couple of hours where previously it would have taken them a full day.
Managers usually can't carve out a full day - but a couple of hours is manageable.
See also this quote from Gergely Orosz:
From https://x.com/GergelyOrosz/status/1914863335457034422> a lot of engineers I know who moved into engineering management are writing code again
They should be managing instead. Not to say that they can't code their own tools, but the statement sounds like a construction supervisor nailing studs or welding steel bars. Can work for a small team, but that's not your primary job.
Hard disagree.
I've been an engineering manager and it's a lot easier to make useful decisions that your team find credible if you can keep your toes in the water just a little bit.
My golden rule is to stay out of the critical path of shipping a user-facing feature: if a product misses a deadline because the engineering manager slipped on their coding commitments, that's bad.
The trick is to use your minimal coding time for things that are outside of that critical path: internal tools, prototypes, helping review code to get people unstuck, that kind of thing.
This is also true of (technical) product managers from an engineering background.
It's been amazing to spin up quick React prototypes during a lunch break of concepts and ideas for quick feedback and reactions.
Yeah I think flow is more about holding a lot of knowledge about the code and its control flow in your head at a time. I think there's an XKCD or something that illustrates that.
You still need to do that if you're using AI, otherwise how do you know if it's actually done a good job? Or are people really just vibe coding without even reading the code at all? That seems... unlikely to work.
[dead]
The catch is that when AI handles 95% or 99% of a task, people say great, don't need humans. 99% is great.
But when that last 1% breaks and AI can’t fix it. That’s where you need the humans.
By then the price will have increased quite a bit; if you want me to fix your AI crap, you're going to pay until it hurts.