Hacker News
Vibe coding is mad depressing (gmnz.xyz)
238 points by dirtylowprofile 19 hours ago | 137 comments




This article is not about vibe coding per se; it's about not having strong boundaries between you, the developer, and your client. You should not allow the client to dictate how you work, much less give them permission to merge in code. This was true before AI too: clients might say "do X this way," and you should simply say no, because they are paying for your expertise*. It's like hiring a plumber and then trying to tell them how to fix the toilet.

*as an aside, this reminds me of the classic joke where the client asks for the price list for a developer's services:

I do it: $500

I do it, but you watch: $750

I do it, and you help: $1,000

You do it yourself: $5,000

You start it, and you want me to finish it: $10,000


Here is the full price list :)

https://files.catbox.moe/1d87t7.jpg


It seems to be missing “You design, we will provide guaranteed counselling”.

We developers are experiencing what designers have had to deal with for decades.

2009 anyone? https://theoatmeal.com/comics/design_hell


Devs too, this one is at least 30 years old: https://www.pcuf.fi/~pjt/pink/software-architecture.html

It's also taking over 30 years to open, any chance of a mirror? :)


My conclusion from the comic is that the client is happy. Clients like this don’t really want their designer's expertise.

Product people who are devs, or devs who can do product, have already enjoyed this.

For example, "Can't we just add a button that does this?"


Back when I was doing freelance work, the worst type of client was the semi-technical one: technical enough to want to contribute code to the project or to hold strong architectural opinions, but not technical enough to understand the nuances and implications of their suggestions.

I guess that, with vibe coding, it is very easy for every client to become like this.


> [...] they wanted to contribute to the project or to have strong architectural opinions

Also the worst kind of tech line-manager: typically promoted from individual contributor, but still wanting to argue about architecture, having arrived at their strong opinions within the 7 minutes they perused the design document between meetings.

If you're such a manager, you need to stop. If you're working with one, change teams or change jobs; you cannot win.


> but not technical enough to understand the nuances and the implications of their suggestions.

That isn't unique to "clients." It's human nature. Humans don't know what they don't know.

See: various exploits since computers were a thing.


This is also a story about marketing.

At this point, the level of puffery is on par with claiming a new pair of shoes will turn you into an Olympic athlete.

People are doing this because they’re told it works, and showing up to run a marathon with zero training because they were told the shoes are enough.

Some people may need to figure out the problem here for themselves.


The classic version is not a developer, it's a mechanic or some other blue-collar job. I saw it on a sign in a machine shop 20 years ago.

This is exactly it. Some clients turn everything into a messy room and a messy desk, decide to get help so they don't have to clean it up themselves, see a clean space to create in, and then start making a mess all over again.

Ask such clients: why are we here? What have previous attempts (because there have been attempts) provided and not provided, and why do you think they did or didn't have long-term viability?

This is less about coding and more about helping people learn how to think about where and how things can fit in.

It's great to go fast with vibe coding, especially if you like disposable code that you can iterate on. In the hands of a developer it might let you try more things or get more done in some way, but maybe not in all the ways, especially if the client isn't clear.

How well the client can explain what they want, with good external signals, and how well they know how to ask will often be a huge indicator, long before they try to pull you into their web of spider diagrams, drawn like the webs of those spiders that were given something.


Video/audio production here and the exact same rules apply. You can’t let clients dictate your tools any more than you feel you should tell your plumber what they can use to fix your sink.

“We use Premiere.” Cool. I use Resolve. If we aren’t collaborating on the edit then this is an irrelevant conversation. You want a final product, that’s what you hired me for my dude. If you want me to slot into your existing editing pipeline that’s a totally different discussion.

“Last guy shot on a Red.” Cool. Hire them. Oh right you hired me this time. Interesting! Should we unpack that?

Freelancers: Stand your ground! Stand by your work! Tell clients to trust you!


Not all freelancers are good, sometimes you just ask these questions to understand how good the freelancer is.

No one said all freelancers are good, but either way none of the questions I gave as examples are indicators of my skill as a shooter/editor. The final product is all that matters. Whether I used Premiere, Resolve, FCPX, etc. to get there is immaterial.

If you pay me for a 30s highlight, you get a 30s highlight. If you don’t like the highlight itself that’s a different discussion.


>It's like hiring a plumber then trying to tell them how to fix the toilet.

And continuing to shit and piss and puke in the toilet while they are trying to fix it.


A disgusting analogy, but hilariously accurate.

https://news.ycombinator.com/item?id=30359560

DonHopkins on Feb 16, 2022:

When I implemented the pixelation censorship effect in The Sims 1, I actually injected some random noise every frame, so it made the pixels shimmer, even when time was paused. That helped make it less obvious that it wasn't actually censoring penises, boobs, vaginas, and assholes, because the Sims were actually more like smooth Barbie dolls or GI-Joes with no actual naughty bits to censor, and the players knowing that would have embarrassed the poor Sims.

[...]

The other nasty bug involving pixelization that we did manage to fix before shipping, but that I unfortunately didn't save any video of, involved the maid NPC, who was originally programmed by a really brilliant summer intern, but had a few quirks:

A Sim would need to go potty, and walk into the bathroom, pixelate their body, and sit down on the toilet, then proceed to have a nice leisurely bowel movement in their trousers. In the process, the toilet would suddenly become dirty and clogged, which attracted the maid into the bathroom (this was before "privacy" was implemented).

She would then stroll over to toilet, whip out a plunger from "hammerspace" [1], and thrust it into the toilet between the pooping Sim's legs, and proceed to move it up and down vigorously by its wooden handle. The "Unnecessary Censorship" [2] strongly implied that the maid was performing a manual act of digital sex work. That little bug required quite a lot of SimAntics [3] programming to fix!

[1] Hammerspace: https://tvtropes.org/pmwiki/pmwiki.php/Main/Hammerspace

[2] Unnecessary Censorship: https://www.youtube.com/watch?v=D-ayfR3UtcY [broken link updated]

[3] SimAntics: https://news.ycombinator.com/item?id=22987435 and https://simstek.fandom.com/wiki/SimAntics


pretty standard plumber prices

> It's like hiring a plumber then trying to tell them how to fix the toilet.

I never faced or witnessed that in software dev.


I once had to give my non-technical boss Frontpage and told him to make the fucking login page himself because nothing I did matched his exact wishes.

The result was a piece of shit, but it was his piece of shit and he loved it.


This is an awesome comment.

History doesn’t repeat itself, but it definitely rhymes – I can’t wait for the modern versions of this.


There are multiple stories of public-facing applications with Microsoft Active Directory as the source of user accounts that speak to the worst examples of this.

I was involved in such an attempt but it never got off the ground.


If you haven't faced or witnessed it, it is a moment that you will not forget, and then over time little by little, realize it might not be so uncommon.

We don't have to seek it out, it finds us.


We testers have been dealing with that crap forever. Every non-tester thinks they know how a professional tester should work, and imagines our work is just writing test cases.

By non-testers, do you mean developers/UI designers and such?

For me vibecoding has a similar feeling to a big bag of Doritos. It's really fun at first to slap down 10k lines of code in an afternoon knowing this is just an indulgence. I think AI is actually really useful for getting a quick view of some library or feature. Also, you can learn a lot if you approach it the right way.

However, every time I do any amount of vibecoding it eventually transitions into pure lethargy mode (apparently "lethargia" is not a word, by the way). Once you eat half a bag of Doritos, are you really not going to eat the second half... do you really want to eat the second half? I don't feel like I'm benefitting as a human just being a QA tester for the AI, constantly shouting that X thing didn't work and Y thing needs to be changed slightly.

I think pure vibecode AI use has a difficult-to-understand efficiency curve: it's obviously very efficient in the beginning, but over time hard things start to compound, such that if you didn't actually form a good understanding of the project, you won't be able to make progress after a while. At that point you ate the whole bag of Doritos, you feel like shit, and you can't get off the couch.

This. First I try it just a little, to do a boring part. It feels great. The boring part that was holding me up is gone, and all it took was a little instruction. The dopamine hit is real. So of course I will try it again. But not so fast: it needs to be corrected to keep everything aligned with the architecture. And as my requests get bigger, it needs more and more corrections. Eventually correcting everything becomes too tedious, and accepting is just too easy, so I lower my standards and soon enough lose track of all the decisions. The branch is now useless, since I don't want to debug or own code I no longer understand, so I start over. I want work to feel like a training session where you get fairly rewarded for your efforts with better understanding, not like a slot machine where you passively hope it gets it right next time.

"My requests get bigger" is the issue here. You're not talking to a real human with common sense or near-infinite working memory.

It's a language model with finite context and the ability to use tools. Whatever it can fit into its context, it can usually do pretty well. But it does require guidance and documentation.

Just like working with actual humans that aren't you and don't share your brain:

1) spec your big feature, maybe use an LLM in "plan" mode. Write the plan into a markdown file.

2) split the plan into smaller independent parts, in GitHub issues or beads or whatever

3) have the LLM implement each part in isolation, add automatic tests, commit, reset context

Repeat step 3 until feature is done.

If you just use one long-ass chat and argue with the LLM about architecture decisions in between code changes, it WILL get confused and produce the worst crap you've ever seen.
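For step 1, the plan file written by "plan" mode might look something like this. Everything here is illustrative: the feature, the part names, and the task wording are all made up, not from any real project.

```markdown
# Plan: user-facing CSV export (example feature)

## Part 1: backend endpoint
- add an endpoint that returns the current query results as CSV
- unit tests for escaping, headers, and empty result sets

## Part 2: frontend button
- "Export" button that calls the endpoint
- disabled state while a query is still running

## Part 3: docs and cleanup
- update README, remove dead code paths
```

Each `Part N` heading then becomes one step-3 task, run in a fresh context.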


Great analogy. Instead of eating the whole bag of Doritos in one sitting, do it in phases. Instead of being just a QA tester, you get to pause, reflect, and make sure you and the AI are on the same page.

> try to make sure you and the AI are on the same page.

What good is AI as a tool if it can end up not on the same page as you?

Imagine negotiating with a hammer to get it to drive nails properly

These things suck as tools


I have yet to find the niche where it is "good at the beginning". So far I've mostly tried asking it to build C tools that use advanced Linux APIs.

Me: hey make this, detailed-spec.txt

AI: okidoki (barfs 9k lines in 15 minutes) all done and tested!

Me: (looks at the code; it has feature-sounding names, but all features are stubs, all tests are stubs, and it does not compile)

Me: it does not compile.

AI: Yes, but the code is correct. Now that the project is done, which of these features do you want me to add? (some crazy list)

Me: Please get it to compile.

AI: You are absolutely right! This is an excellent idea! (proceeds to stub and delete most of what it barfed). I feel really satisfied with the progress! It was a real challenge! The code you gave me was very poorly written!

... and so on.


I'm not sure what you're using. I've used Claude in agent mode to port a very complex and spaghetti-coded C application to nicely structured C++. The original code was so intertwined that I didn't want to figure it out, so I had shelved the project until AI came along.

It wasn't super bad at converting the code, but even it struggled with some of the logic. Luckily, I had it design a test suite to compare the outputs of the old application and the new one. When it couldn't figure out why it was getting different results, it would start generating hex dump comparisons, writing small Python programs, and analyzing the results to figure out where it had gone wrong. It slowly iterated on each difference until it had resolved them. Building the code, running the test suite, comparing the results, changing the code, repeat. Some of the issues were likely bugs in the original code (that it fixed), but since I was going for byte-for-byte perfection it had to re-introduce them.

The issues you describe I have seen but not with the right technology and not in a while.


At the high level, you asked LLM to translate N lines of code to maybe 2N lines of code, while GP asked LLM to translate N lines of English to possibly 10N lines of code. Very different scenarios.

Are you sure Claude didn't do exactly the same thing, with the harness, Claude Code, just hiding it from you?

I have seen AI agents fall into the exact loop that GP discussed and needed manual intervention to fall out of.

Also blindly having the AI migrate code from "spaghetti C" to "structured C++" sounds more like a recipe for "spaghetti C" to "fettuccine C++".

Sometimes it's hidden data structures and algorithms you want to formalize when doing a large-scale refactor. I have found that AIs are definitely able to identify that, but it's definitely not their default behaviour, and they fall out of it pretty quickly if not constantly reminded.


Sounds like the debug mode that Cursor just announced.

In my case that was Claude Code with Opus.

I don't ever look at LLM-generated code that either doesn't compile or doesn't pass existing tests. IMHO any proper setup should involve these checks, with the LLM either fixing itself or giving up.

If you have a CLI, you can even script this yourself, if you don't trust your tool to actually try to compile and run tests on its own.

It's a bit like a PR on github from someone I do not know: I'm not going to actually look at it until it passes the CI.
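The "script this yourself" gate can be sketched as a tiny shell function. This is only a sketch: the build and test commands are parameters precisely because every project's are different, so substitute your own (e.g. `make` / `make test`).

```shell
# review_gate BUILD_CMD TEST_CMD
# Refuse to surface LLM-generated code for human review until it
# both builds and passes the existing test suite.
review_gate() {
  build=$1
  tests=$2
  $build || { echo "reject: build failed"; return 1; }
  $tests || { echo "reject: tests failed"; return 1; }
  echo "ok: ready for human review"
}
```

Usage would be something like `review_gate make "make test"`; anything that prints a "reject" line goes straight back to the agent instead of to your eyes.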


Holy shit, I feel the same. I was arguing with an LLM one day about how to do Kerberos auth on incoming HTTP requests. It kept giving me bogus advice that I could disprove with a tiny snip of code. I would explain. It would react just like yours. After a few rounds, it would give the first answer again. Awful. So infuriating.

I had a similar issue with gnuplot. The LLM-suggested scripts frequently had syntax errors. I say: LLMs are awesome when they work; otherwise they are a time suck / net negative.


Sometimes they just get into "argument simulator mode". There's a lot of training data of people online having stupid arguments.

You can write any program you want, as long as it is flappy bird in reactjs.


Willing to name “an LLM”?

Was this a local model?


Include in the prompt a verifiable, testable exit criterion (compiling) and use an agentic AI like Cursor or Codex with this; you’d be surprised what happens :)

Is Claude Code with both Sonnet and Opus agentic enough? Because it is constantly finding creative ways to ignore direct, repeated instructions ("user asked X but it is hard, let's do Y instead"), implement fake tests ("feature X is complex, we need to test it completely; let's write a script that will create the files feature X would have created, then test that the files exist"), and sabotage or delete working code ("we need to track the FD of the open file (runs strace). The FD is 5 (hardcodes 5 in the code instead of implementing anything useful). tests pass now!")

I have never experienced this level of malice and sweet-talking work avoidance from anyone. It apologizes like an alcoholic, then proceeds to double down.

Can you force it to produce actually useful code? Yes, by repeatedly yelling at it to please follow the instructions. In the process, it will break, delete, or introduce hard-to-find bugs into the rest of the codebase.

I'm really curious whether anyone actually has this thing working, or whether they simply haven't bothered to read the generated code.


You need to use the features that Claude Code gives you in order to be successful with it. Your build and tests should be in a Stop hook that prevents Claude from stopping if the build or tests fail. Combining this with a Stop hook that bails out if the first hook has already failed n times prevents infinite loops.

With anything above a toy project, you need to be really good with context window management. Usually this means using subagents and scoping prompts correctly by placing the CLAUDE.md files next to the relevant code. Your main conversation's context window usage should pretty much never be above 50%. Use the /clear command between unrelated tasks. Consider if recurring sequences of tool calls could be unified into a single skill.

Instead of sending instructions to the agent straight away, try planning with it, prompting it to ask you questions about your plan. The planning phase is a good place to give Claude more space to think with "think > think hard > ultrathink". If you are still struggling with the agent not complying, try adding emphasis with "YOU MUST" or "IMPORTANT".
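For reference, a Stop hook of the kind described above is configured in Claude Code's settings file (`.claude/settings.json`). Roughly this shape; the script path is a placeholder, and the exact schema may change between versions, so verify against the current hooks documentation:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/build-and-test.sh"
          }
        ]
      }
    ]
  }
}
```

The script exits non-zero with a message on stderr when the build or tests fail, which keeps Claude working; the hook's input includes a `stop_hook_active` flag the script can check to bail out instead of looping forever.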


> I have yet to find the niche where it is "good at the beginning".

The niche is "the same boring CRUD web app someone made in 2003 but with Tailwind CSS".


Good work, if you can get it.

It's terrible at the niches I actually have expertise in, which are in mathematics. I'd guess an expert is going to find the flaws in anything it's doing in their field. That being said, if you're just trying to e.g. see what some GUI library can do then it's pretty useful to get something going. I personally would prefer not using it in anything that's not very much a throwaway test project though, but that is my luxury as a jobless bum.

But doesn't your argument actually mean it is terrible at absolutely everything in a very subtle, convincing way, so that it takes an actual expert in the field to tell that the generated text is not a profound revelation but a bag of nonsense?

Meaning, is the answer in the field I'm not an expert of good, or am I simply being fooled by emoji and nice grammar?


I don't think it's an expert; I just don't think expertise is necessary to get some value out of it if you aren't an expert yourself. The trap is letting the charade go on longer than it should, though. I personally only see the main value in using it to create test projects or to get the gist of what a library can do. I do think that's pretty valuable, and I also think real expertise is more valuable.

Or you can do like some of the others suggest and eliminate pure vibecoding. Just use it as a back and forth where you understand along the way and make well-reasoned changes. That looks a lot more like real engineering, so it's not surprising the other commenters report better results.


Gell-Mann amnesia, but for LLMs.

It's an interesting concept, but inapplicable here because I don't trust the media reporting on LLMs and I personally believe expert programmers are never going to be replaced. My concept of the value of LLMs is that they are good for generating throwaway test code to assess the use of a library or to prototype a feature.

This is exactly what happened in my experience of vibe coding: you don't understand the code after a while, and pushing the project from the 80 percent mark to the 100 percent mark is exponentially more difficult. That's where the AI fails and you have to take over. Only, you don't know anything about the code, so you give up.

I had to rewrite several vibe coded projects from scratch due to this effect. It's useful as a prototyping tool but not a complete productionizing tool.


I have had similar experiences, and wonder how the subjective experience is impacting my estimations of progress and productivity.

Specifically: what if I just started downloading repos and aggressively copying and pasting to fit my needs? I'd get a whole bunch of code kinda quick, and it'd mostly work.

It feels less interactive, but shares a high level similarity in output and understanding.


I've had it stuck in my head for months now that "LLMs are Legacy Code as a Service". A lot of what they ~~plagiarize~~ produce is based on other people's legacy code. A lot of vibe coding is producing "Day 0 Legacy Code" that is hard to debug/maintain in a lot of the exact same ways Legacy Code always is. (It was written by a developer who is not currently around. It's probably poorly commented/documented in the hows/whys rather than the whats. If it was fast tracked into production somewhere it is probably already in a "not broke, don't fix it" state where the bugs are as much a part of the expected behavior as the features.)

As a developer who has spent far too much of my career maintaining or upgrading companies' legacy code, my biggest fear with the LLM mania is not that my skills go away, but that they come into much higher demand in an uncomfortable way: the turnaround time between launch and legacy code becomes much shorter, and the number of managers who understand why it is "legacy code"/"tech debt" shrinks, because it is neither old nor in obviously dead technologies. "Can you fix this legacy application? It was launched two days ago and nobody knows what it does. Management says they need it fixed yesterday, but there's no budget for this. Good luck."


Lines of code used to be a moat for a company, it no longer is.

Being effective with the code to get the same things done is. That requires a new kind of driving for a new kind of vehicle.


What do you mean? I love Doritos!

The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do. Things that give them meaning, many of which are tied to earning money and producing value for doing just that thing. Software/coding is one of these activities. One can do coding for fun, but doing the same coding where it provides value to others/society and financial upkeep for you and your family is far more meaningful.

To those who have swallowed the AI panacea hook, line and sinker, those who say "it's made me more productive" or "I no longer have to do the boring bits and can focus on the interesting parts of coding": I say follow your own line of reasoning through. It demonstrates that AI is not yet powerful enough to NOT need to empower you, to NOT need to make you more productive. You're only ALLOWED to do the 'interesting' parts presently because the AI is deficient. Ultimately AI aims to remove the need for any human intermediary altogether. Everything in between is just a stop along the way, so for those it empowers: stop and think a little about the long-term implications. It may be that for you, right now, it is a comfortable position financially or socially, but your future self just a few short months from now may be dramatically impacted.

As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".

I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder, the law secretary whose dream job is being automated away, a dream dreamt from a young age, the journalist whose value has been substituted by a white text box connected to an AI model.

I don't have any ideas as to what should be done or, more importantly, what can be done. Pandora's box has been opened; Humpty Dumpty has fallen and can't be put back together again. AI feels like it has crossed the Rubicon. We must all collectively wait to see where the dust settles.


We’re in the GeoCities phase of LLMs: mostly trash, very basic, but eventually people will either get bored and go back to whatever it is they were doing, or actually use the tools for useful and productive work.

As for the feeling when an LLM one-shots your project start (and does a pretty good job), have a German word:

Automatisierungskummer

(automation sorrow) • Kummer is emotional heaviness, a mild-to-deep sadness.


Some remember the GeoCities era as one of the best phases of the internet.

It's hard to know what things will look like in 20 years, but people may miss the time when AI cost nothing, or very little, and was less fettered. I think probably not; it would be like being nostalgic for really low-res, low-framerate YouTube videos. But nostalgia is pretty unpredictable, and some people love those old FMV games.


> Some remember the Geocities era as one of the best phases of the internet.

I remember the feeling of realizing that I had terrible taste just like everyone else and I was putting huge amounts of effort into trying to do seamless tiling background images that still looked awful and distracting and ruined the contrast. And also the feeling of having no idea what to talk about or why anyone would care.

Now I have way too much to talk about — so much that I struggle to pick something and actually start writing — and I'm still not sure why anyone would care. But at least I've learned to appreciate plain, solid-colour backgrounds.


"but people may miss the time when AI cost nothing" - That's been on my mind a lot... it's like I feel like I have to use it more or I'll regret it! I am not looking forward to the AI talking about NordVPN injected into the session.

Just use another AI to remove it!

This is not a German word. Pseudo-German at best.

Put it into Google and you will see.


Yes, this page is the only match...

All the consulting practice arguments aside, this is fundamentally a gatekeeping argument about clients staying in their lane. I'm sure doctors feel the same way about patients with weirdly specific questions about HFpEF diagnoses. Doctors have always hated "Doctor Google", and now they have to contend with "Doctor GPT". It's up to you how much sympathy to have for them.

Not related to other types of clients, but for doctors and patients specifically, I have heard stories where doctors dismissed patients' concerns until the patients themselves googled and found out exactly what issue they had and then the doctors were much more amenable to solving it [0].

Indeed, [1]

> researchers found that searching symptoms online modestly boosted patients’ ability to accurately diagnose health issues without increasing their anxiety or misleading them to seek care inappropriately [...] the results of this survey study challenge the common belief among clinicians and policy-makers that using the Internet to search for health information is harmful.

[0] https://www.cbc.ca/radio/whitecoat/man-googles-rash-discover...

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC8084564/


It's not at all difficult for a scientifically literate person to be more up to date on the literature of something they have, or could have, than even a specialist in that broad area. There are too many disorders and not enough time.

I have something that about a quarter percent of individuals have in the US. A young specialist would know how to treat based on guidelines but beyond that there's little benefit in keeping up to date with the latest research unless it's a special interest for them (unlikely).

Good physicians are willing to read what their patients send them and adjust care accordingly. Prevention in particular is problematic in the US. Informed patients will have better outcomes.


My PCP and I have a really good rapport, so when I started having weird, confusing health problems they were quite happy to hear what I was finding on PubMed and then share their thoughts on it, and together we figured things out and got my situation handled. I thought it was a nicely complementary situation: they didn’t have the time to do a literature dive, and I didn’t have the expertise to fully understand what I was reading.

But I bet what happens more often is patients showing up with random unsubstantiated crap they found on Reddit or a content farm, and I can understand health care providers getting worn down by that sort of thing. I have a family member who believed he had Morgellons disease, and talking to him about it was exhausting.


> Morgellons (/mɔːrˈɡɛlənz/) is the informal name of a self-diagnosed, scientifically unsubstantiated skin condition in which individuals have sores that they believe contain fibrous material.[1][2] Morgellons is not well understood, but the general medical consensus is that it is a form of delusional parasitosis,[3] on the psychiatric spectrum.[4]

Your family member... mistakenly believed that he had a psychiatric condition involving a mistaken belief?


No, he believed he had Technicolor skin parasites. Suggesting it might be a psychiatric condition was a good way to start a nasty fight.

I’ve observed people in this “community” from a distance.

Does your family member have the sores?


I think this is similar in other fields, and appears to be related to your self-esteem. Some junior (and sometimes even senior) developers have a hard time accepting improvements to their design and code. If you identify with your code, you may be unwilling to listen to suggestions from others. If you are confident, you will happily consider suggestions from others, and you are able to admit that anything can be improved and that others can give you valuable insight.

Similarly, it appears that some doctors are willing to accept that they have a limited amount of time to learn about specific topics, and that a research-oriented and intelligent patient very interested in a few topics can easily know more about them. In such a case a conducive mutual learning experience may happen.

One doctor told me that what he is offering is statistical advice, because some diseases may be very rare and so it makes sense to rule out more common diseases first.

Other doctors may become defensive, if they have the idea that the doctor has the authority and patients should just accept that.


which includes me / my parents, though we found a thankfully excellent primary care doctor while I was growing up who took the new information in stride and chased promising paths we managed to find. we learned a lot from each other in the process.

doctors don't generally have the time or inclination to spend unpaid time doing specialized research for one of their many patients. competent layman efforts are generally huge wastes of time compared to asking a specialist, but in the absence of a specialist they can still be extremely useful, and specialists don't know everything either. plus there aren't always specialists, whether affordable/accessible or sometimes existent at all.


I don't think the analogy holds up at all. A doctor usually has a very small time window to deal with your problem and then switches to the next patient.

If I'm working on your project I'm usually dedicated to it 8 hours a day for months.

I do agree this is not new. I had clients with some development experience come up with off-the-cuff suggestions that just wasted everyone's time and were really disrespectful (like, how bad at my job do you think I am if you think I didn't try the obvious approach you came up with while listening to the problem?). But AI is going to make this much worse.


As someone who does consulting, it's more about the attitude than the tool itself. Clients trying to understand the problem by themselves with whatever tools they can use are generally well-disposed and easy to work with. Those who email you stuff like "Why don't you have chatgpt do this???" as if it's a revolutionary thought are mostly a PITA. I assume doctors feel largely the same.

I feel like my consulting bona fides are also pretty strong, and while I get how annoying this must feel, it's hard for me personally to be irritated either at clients or at frontier models for enabling clients to do this.

To me it's more like the board, in some small way, being shaken up, and what I mostly see is an opportunity for consultancies to excel at interfacing with clients who come to them with LLM code and LLM-generated ideas.


Sure, that's a great point. If the LLM code/ideas they come with are actually valuable, they tend to fall into the first bucket though.

I'm not saying we need to dismiss people for using LLMs at all, for better or for worse we live in a world where LLMs are here to stay. The annoying people would have found a way to be annoying even without AI, I'm sure.


There is a big difference between a client who thinks for themself, researches, and challenges a professional's assessment, and a client who wants to dictate or participate in the implementation process. In the case of medical services, we would be talking about a patient who wants to perform the operation themselves...

I think you hit the nail on the head with the analogy to Doctor GPT, but I think you missed it with gatekeeping. I don't think it's about gatekeeping at all.

A freelance developer (or a doctor) is familiar with working within a particular framework and process flow. For any new feature, you start by generating user stories, work out a high-level architecture, think about how to integrate that into your existing codebase, and then write the code. It's mostly a unidirectional flow.

When the client starts giving you code, it turns into a bidirectional flow. You can't just copy/paste the code and call it done. You have to go in the reverse direction: read the code to parse out what the high level architecture is, which user stories it implements and which it does not. After that you have to go back in the forward direction to actually adapt and integrate the code. The client thinks they've made the developer's job easier, but they've actually doubled the cognitive load. This is stressful and frustrating for the developer.


> This is stressful and frustrating for the developer.

Charge more and/or set expectations up front.


My doctor actually appreciates that I go to primary and other reliable sources, read up on my conditions, and understand the standard of care and other appropriate courses of action. What he can't stand is people who "do their own research" on, like, InfoWars.

I treat my doctor as a subject matter expert/collaborator, which means that if I come to him with (for example) "what if it's lupus?" and he says "it's probably not lupus", I usually let the matter drop.


> There is no best practices anymore, no proper process, no meaningful back and forth.

Reality check: none of that ever existed, unless either the client mandated it (as a way to tightly regulate output quality from cheaper developers) or the developer mandated it (justifying their much higher prices and value to the customer).

Other than that: average customer buying code from average developer means:

- git was never even considered

- if git was ever used, everything is merged into "master" in huge commits

- no scheduled reviews; they only saw each other when it was time for the next quarterly/monthly payment, and the client was shown (but not able to use) some preview of what had been done so far


I think building apps and websites for other people is mad depressing. It went from "move this up there, and change that colour to pink" to a client ruining a beautiful site by using a no-code tool. Now they have superpowers to ruin it by adding AI-generated code as well. AI can generate absolutely beautiful code if it is generated on the right architecture with the right patterns and rules. The problem isn't the AI, it's the people telling AI and developers what to do.

I am a freelancer as well, and in the last month I got two new clients who asked me to fix their vibe-coded projects.

And I am now thinking of specializing in this field: they already know how f*d they are, and they are going to pay a lot (or rather: they have no other option). Something that looked like a million-dollar idea created for pennies is, three months later, an unbearable, already-rotting pile of insanity that no junior human developer, or even an AI code assistant, is able to extend. But they already have investors or clients who use it.

And for me, with >20 years of coding experience, it is a lot of fun cleaning it up to a state where it is manageable.


The author should really rethink their relationship with clients and the "freedom" clients get in the process.

Back when I did websites for clients, often after carefully thinking a project through and arriving at a final idea of how everything should look, feel, and operate, I would present this optimal concept to clients. Some would start recommending changes and adding their own ideas—most of which I had already iterated through earlier during ideation and design.

It rarely builds good rapport with clients if you start explaining why their ideas for "improvements" are really not that good. Anyway, I would listen to them, nod, and do nothing about their ideas. I would just stick to my concept without wasting time on a client's random "improvements"—leaving them for the last moment, in case the client insisted on them at the very end.

The funny thing is that clients, after more consideration and time, would usually come on their own to the result I had already presented to them—they just needed time to understand that their "improvements" weren't relevant.

Nevertheless, if they insisted on implementing their "improvements" (which almost never happened), I'd do it for an additional price—most often just for them to see that it wasn't a good idea to start with and to get back to what I had already done before.

So, sometimes, ignoring client's ideas really saves a lot of time.


The IKEA Effect [1] is a hell of a drug.

[1] https://en.wikipedia.org/wiki/IKEA_effect


Yeah, it's bad out there. At my company, we have a team of security professionals who focus on keeping our systems (and others') secure. AI for them has gone from "using it for scripting together nmap" to "we really need the platform your team is working on to do X, Y, and Z, so we vibed up this PR". On the engineering side, I don't have the political power to tell them no, because we don't really have senior leadership and we're behind schedule on everything. Why? Well, I spent two hours today resolving dozens of vulnerabilities our code scanners found in some vibed security-team PR. The scanners that they set up, and demanded we use. Half the stuff they vibe we literally have to feature-flag off immediately after release, because they didn't QA it, but they rarely revisit the feature because to them it's always either "on to the next big idea" or, more often, "we're just security, platform isn't our responsibility".

The thing is: I know you might read that and think I'm anti-AI. In this specific situation, at my company: we gave nuclear technology to a bunch of teenagers, then acted surprised when they blew up the garage. This is a political/leadership problem, because everything, nine times out of ten, is a political/leadership problem. But the incentives just aren't there yet for a generalized understanding of the responsibility required to leverage these tools in a product environment that's expected to last years to decades. I think it will get there, but along that road will be gallons of blood from products killed, ironically, by their inability to be dynamic and reliable under the weight of the additive-biased, purple-Tailwind-drenched world of LLM vibeput. But there's probably an end to that road, and I hope when we get there I can still have an LLM, because it's pretty nice to be able to say "heyo, I copy-pasted this JSON but it has JavaScript single quotes instead of double quotes so it's not technically JSON, can you fix that, thanks".
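For what it's worth, that closing chore doesn't even need an LLM. A few lines of Python cover the common case, assuming the pseudo-JSON is a plain dict/list literal — the `single_quoted_to_json` helper name and the keyword substitution are illustrative, and the crude string replace will also rewrite `true`/`false`/`null` if they appear inside string values:

```python
import ast
import json

def single_quoted_to_json(text: str) -> str:
    """Parse a JavaScript-ish literal with single quotes and
    re-emit it as strict, double-quoted JSON."""
    # Map JS keywords to Python literals so ast.literal_eval accepts them.
    # Crude: this also rewrites these words inside string values.
    for js, py in (("true", "True"), ("false", "False"), ("null", "None")):
        text = text.replace(js, py)
    return json.dumps(ast.literal_eval(text))

print(single_quoted_to_json("{'user': 'ada', 'active': true, 'tags': ['a', 'b']}"))
```

Because `ast.literal_eval` only evaluates literals, it won't execute arbitrary code the way `eval` would.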


AI is trash.

The people who think FizzBuzz is a leetcode-level programming question are now vibe-coding the same trash as always, except now they think they are 10x developers for forcing you to review and clean up their trash.


I feel like it allows me to do more of the fun bits of coding and creating. It's not too different from handing the easy/basic/annoying stuff to consultants and less senior engineers. Do people get mad when they hire more devs? You still get to machinate over how to attack a problem in clever ways. Also, you can give 4 out of 5 tasks to the AI and leave the fun bits for yourself.

> Hey! I asked AI for this code, do you think this will work? I think you should use it.

unfortunately this problem precedes AI, and has been worsened by it.

i've seen instances of one-file, in-memory hashmap proof-of-concept implementations being requested to be integrated into semi-large, evolving codebases, with "it took me 1 day to build this, how long will it take to integrate?" questions


For the longest time, contract (and perm) developers/project managers/agencies have taken a lot of liberties with time and money, only to deliver substandard products and then charge more for change requests and bug fixes. The model was long due to be disrupted. This new way of vibe coding is not perfect yet, but it produces results, and that's what sponsors are looking at as a return on investment. As technologists, we have to play a big role in finding the right balance and educating everyone, not just the business folks, about what could go wrong and where it might actually be useful.

Just fire your customer. You didn't know you could do that? When you freelance, you absolutely can.

> The first clues started when a client, who I thought was a software developer, starts merging his own code through the main branch, without warning. No pull request, just straight git push --force origin main ... Last time, I checked this Xcode project did not compiled. Or anything close to it.

This doesn't read like a vibe-coding problem, and more of a client boundaries problem. Surely you could point out they are paying you for your expertise, and to supersede your best practices with whatever AI churns out is making the job they are paying you to do even harder, and frankly a little disrespectful ("I know better").


I run a low-code platform for building internal tools and software. One of my prospects, about to sign a contract, came back telling me that his CTO had asked him to check out vibe-coding tools and build a few internal tools with them. They are a large Series D/E company and have over 250 internal tools built on Retool (a service they are migrating from). The CTO is puzzled and is wondering whether he even needs a platform to build and manage internal tools.

On the other hand, another customer of mine built a few internal tools with vibe coding (and yes, he does have a subscription to my low-code service), but when newer requests came in for upgrades, that's where his vibe-coded app started acting up. His candid feedback was: for internal tools, vibe coding doesn't work.

As a low-code service provider, we are now offering full-fledged vibe-coding tooling on top. That said, I don't know how customers who do not wish to code and just want the software will be able to maintain it without needing professionals.


I recently ported c-rrb to C#, and when the first port was done I used AI to help me refine the code. It was a pleasant experience, apart from the AI introducing subtle bugs every three or four prompts. In the end, Claude and I managed to speed up the code by almost 2x.

The worst part was pushing the tail into the tree. My original code was pretty slow, but every time the AI changed more than 4 lines it introduced subtle bugs.

I did not actually think AI would be that useful.


> Okay, so this non-technical person is sending me codes now.

I started wondering if this person was actually a developer here. Maybe just a typo, or maybe a dialect thing, but does anyone actually use "codes" as a plural?


Always assume the author is ESL in these situations.

In other disciplines, yes. Very common to hear it in mechanical or aerospace engineering, for example. They'll say "codes" to refer to multiple programs or "a code" to refer to a single program. It's amusing, when I was in the field I just went with it.

It’s not just that, there are other things in the article pointing to the person being a non-native English speaker. Which is fine, I’m one too.

It’s somehow ironic though that his written output could’ve been improved by running it through an AI tool.


I'd much rather read someone's actual errored, nonnative writing than whatever an LLM would produce from it. Not only because it's annoying reading the same fake style over and over, but also because the less fluent they are the less able they are to tell when the LLM output is changing things in ways that don't reflect what they're thinking.

And if the main complaint is just a few odd words or structures, it's really not that big of a deal to me.


> his written output could’ve been improved by running it through an AI tool.

I mean, it could've been homogenized by running it through an AI tool. I don't think there's a guarantee that it would've been an improvement. Yes, it probably could've helped refine away phrases that give away a non-native English speaker, but it also would've sanded down and ground away other aspects of the author's personality. Is that an improvement? I'm not so sure.


similar experience - i freelanced recently (embedded systems) where i was to interface with a "software engineer" doing the backend.

Every. single. time. we hit an interface problem he would say “if you don’t understand the error feel free to use ChatGPT”. Dude it’s bare metal embedded software I WROTE the error. Also, telling someone that was hired because of their expertise to chatgpt something is crazy insulting.

We are in an era of empowered idiots. People truly feel that access to this near infinite knowledge base means it is an extension of their capabilities.


I feel that agent coding is actually giving a second wind to SOLID principles and "proper" software architecture. Now you can nag the LLM to follow them, and (a) it will actually apply them if well directed, and it doesn't mind the (small?) extra complexity upfront; (b) you pretty much immediately see the effects.
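The "nagging" can also be made standing policy instead of a per-prompt chore by putting it in the agent's rules file (Cursor's `.cursorrules`, Claude Code's `CLAUDE.md`, and similar). A hypothetical sketch — the wording is illustrative, not any tool's canonical format:

```text
# Architecture rules the agent must follow
- Single responsibility: if a change touches unrelated concerns, split it.
- Depend on abstractions: new behavior goes behind an existing interface,
  or propose a new one before implementing.
- Open/closed: prefer extension points over editing stable modules.
- No new third-party dependencies without asking first.
- Every public function gets a unit test before the task is "done".
```

The payoff the commenter describes follows from this being in context on every request, so drift gets corrected at generation time rather than in review.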

This sounds a lot more like a typical freelance client horror story than a problem with vibe coding.

I was almost expecting to hear that it made the job too easy. This kind of work is perfect for vibe coding. But you should be the one doing it.


Been writing software for like 20 years now and I love it. I am also a fan of AI-assisted coding, but I only just started using Cursor. Gosh I do not like it at all for a simple reason: since I didn't write the code, in order to understand it I have to read it. But gaining understanding that way takes longer than writing it myself does.

When you write the code, you understand it. When you read the code produced by an agent, you may eventually feel like you understand it, but it's not at the same deep level as if your own brain created it.

I'll keep using new tools, I'll keep writing my own code too. Just venting my frustrations with agentic coding because it's only going to get worse.


Yep. I had a few vibe coded projects that were fairly far along and then things broke. The code was so convoluted and it took me so long to understand that I just opted to rewrite everything from scratch without AI. Sure, it took longer but I understood all of it.

No joke: maybe that is the current value of vibe coding. It helps you get started with a crappy version. In your experience, which one do you think would take longer? (1) Vibe code until it breaks, then rewrite everything from scratch, or (2) write everything from scratch. I don't vibe code (yet?), but I do use LLMs to get ideas about how to solve problems and to look at sample code, especially when I don't know which library function to call.

Yeah I do use it for prototyping when I just want to get some version working, so I'm not knocking that, but it's more so trying to warn pure vibe coders that they won't get far if they don't eventually buckle down and write code themselves, for the LLMs will break the code at some point.

You raise a great point here:

    > since I didn't write the code, in order to understand it I have to read it. But gaining understanding that way takes longer than writing it myself does.

I remember reading Joel Spolsky's blog 25 years ago, and he wrote something like: "It is harder to read code than to write code." I was quite young at that stage in my programming journey, but I remember being simultaneously surprised and relieved -- to know that reading code was so damn hard! I kept thinking that if I just worked harder at reading code, eventually it would be as easy as writing code.

“Indeed, the ratio of time spent reading versus writing is well over 10 to 1. We are constantly reading old code as part of the effort to write new code. ...[Therefore,] making it easy to read makes it easier to write.” ― Robert C. Martin, Clean Code: A Handbook of Agile Software Craftsmanship

Also:

Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?

— Brian Kernighan: The Elements of Programming Style, 2nd edition, chapter 2

In summary: write simple code, it's easy to read and understand - by future you who forgot why you did something and others.


that idea is somewhat true - if you work harder, reading code definitely becomes easier

Reading the post and some of the comments here, looks to me like people are having bigger problems with how other people use AI than with AI itself.

The protocol should be that they hand off the prompts/context and not the generated code.

Just want to highlight that the gmnz.xyz domain is on the UCEPROTECTL3 blacklist.

IMO, UCEPROTECTL3 is a scam and it's not worth even acknowledging their existence. https://www.reddit.com/r/sysadmin/comments/eur4ju/removal_fr... among many other similar posts.

https://www.uceprotect.net/en/index.php?m=7&s=8 -- "pay us to fix a problem that we've caused, and if you have the gall to call it what it is (extortion), then we'll publish your email and be massive dicks about it"

(To be clear, not all spam blacklists are scams - just UCEPROTECTL3 specifically)


You may be right, and I'd be happy to ignore it, but unfortunately, my work laptop has baked it in. I assume many other companies are using it as well.

Baked in?? Are you saying that your work laptop uses email blacklists as a way to block you from e.g. visiting a website? Please say more. Also where do you work so that I know where I never want to work?

I apologize for my sloppy use of words.

What I'm saying is that I can't access this website from my work laptop - it shows me branded blocked page.

I'm not 100% sure, but I think there is a policy set up in Zscaler blocking access to the domains defined in some sort of blacklist. The reason I assumed it's UCEPROTECTL3 is that it was the only positive result I got from an online blacklist lookup against gmnz.xyz.

And no, I don't feel comfortable sharing my employer.


PRs and branch rules are a thing.
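If you self-host the remote, two `git config` flags on the bare repo are enough to make a client's `git push --force origin main` bounce server-side; on GitHub or GitLab the equivalent is a branch protection rule. A minimal sketch (the `harden_bare_repo` helper is hypothetical; it assumes `git` is on the PATH):

```python
import subprocess
import tempfile

def harden_bare_repo(path: str) -> None:
    """Configure a bare repo so force pushes (history rewrites) and
    branch deletions are rejected at the server, regardless of what
    any client tries locally."""
    subprocess.run(["git", "init", "--bare", path],
                   check=True, capture_output=True)
    subprocess.run(["git", "-C", path, "config",
                    "receive.denyNonFastForwards", "true"], check=True)
    subprocess.run(["git", "-C", path, "config",
                    "receive.denyDeletes", "true"], check=True)

repo = tempfile.mkdtemp(suffix=".git")
harden_bare_repo(repo)
```

With `receive.denyNonFastForwards` set, a forced push fails with a "non-fast-forward" rejection, which turns the boundary conversation into a mechanical guarantee rather than a request.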

I see it as a reason why we’re going to remain employable for a while.

You have to drive the LLM, you cannot let it drive. Therefore you still need to code.

> There is no best practices anymore, no proper process, no meaningful back and forth.

There absolutely is and you need to work with the tools to make sure this happens. Else chaos will ensue.

Been working with these things heavily for development for 6-12 months. You absolutely must code with them.


Their issue is their clients trying to push AI garbage, not themselves. And that's a business professionalism boundary issue, not an AI one.

I was responding to the fact they are saying there is no best practices etc, as I quoted.

You are right. They are powerful tools when in the hands of a skilled craftsman.

u should abuse the non technical person sending you code. not say "thanks, though"

What does this mean? How are you going to abuse them? Something is missing here...

The only thing that matters anymore in corporate is: does the code solve the problem.

Also, is it just me, or has the feeling of victory gone away completely ever since AI became a thing? I used to sweat and struggle, and finally have my breakthrough, the "I'm invincible!" Boris moment, before the next thing came into my task inbox.

I don't feel that high anymore. I only recently realized this.


Honestly the emoji thing I like, it helps me read log output.

This is all over LinkedIn now. Basically, idea bros manage to get their ideas seemingly working with vibe coding, but the moment something breaks they expect they can just "pay to fix the small broken part" and get back to work quickly. They don't realise the cost for the developer to get up to date on the project, and then probably fix a mountain of poorly done, insecure work, just to "quickly finish" it. A lot of them are also scammers and try to get you to start work without even having a contract.

Not really worth working on any of these projects.


“I made a chat app using Claude in two hours”

Ah yes a supabase backed, hallucinated data model with random shit, using deprecated methods, and a copy paste UI. Zero access control or privacy, 1% of features, no files uploading or playback or calling.

“Can you scale this to 1M users by end of the week? Something similar to WhatsApp or Telegram or Signal”

Sybau mf


>Sybau mf

What does this mean?


“Shut yo bitch ass up, mothafucka.”

Like a neck tattoo, but the text form.



