I was expecting this to be the point of the article when I saw the title. Popular projects appear to be drowning in PRs that are almost certainly AI-generated. OpencodeCli has 1200 open at the moment [1]. Aider, which is sort of abandoned, has 200 [2]. AFAIK, both projects are mostly one maintainer.
All the use cases that popped into my head when I saw this were around how nice it would be to quickly see what was really happening without trying to flip between logs and the AWS console. That's really how I use k9s, and I wouldn't be able to stand k8s without it. I almost never make any changes from inside k9s. But yeah... I could see using this with a role that only has Read permissions on everything.
This is sort of why I think software development might be the only real application of LLMs outside of entertainment. We can build ourselves tight little feedback loops that other domains can't. I somewhat frequently agree on a plan with an LLM and a few minutes or hours later find out it doesn't work, and then the LLM is like "that's why we shouldn't have done it like that!". Imagine building a house from scratch and finding out that it was using some American websites to spec out your electrical system, and not noticing the problem until you're installing your Canadian dishwasher.
That's why those engineering fields have strict rules, often require formal education, and someone can even end up in prison if they screw up badly enough.
Software is so much easier and safer; until very recently anonymous engineering was the norm, and people are very annoyed with Apple pushing for signing off on the resulting product.
Highly paid software engineers across the board must have been an anomaly that is ending now. Maybe in the future only those who code actually novel solutions or high risk software will be paid very well - just like engineers in the other fields.
> people are very annoyed with Apple pushing for signing off the resulting product.
Apple is very much welcome to push for signing off of software that appears on their own store. That is nothing new.
What people are annoyed about is Apple insisting that you can only use their store, a restriction that has nothing to do with safety or quality and everything to do with the stupendous amounts of money they make from it.
It's literally a case of Apple requiring the binary to be signed to run on the platforms they provide; Apple doesn't have a say on other platforms. It is a very similar situation with local governments.
Also, people complain all the time about rules and regulations for making stuff. Especially in the EU, you can't just create products however you like and let people decide if they're safe to use: you are required to make your products meet certain criteria and avoid using certain chemicals and methods, you are required to certify certain things, and you can't be anonymous. If you are making and selling cupcakes, for example, and something goes wrong, you will be held responsible. And not only when things go wrong: often local governments will do inspections before letting you start making the cupcakes, and every now and then they can check up on you.
Software appears to be headed in that direction. Of course, due to the nature of software it probably wouldn't be exactly like that, but IMHO it is very likely that at least having someone responsible for the things a piece of software does will become the norm.
Maybe in the future, if your software leaks sensitive information, for example, you may end up being investigated and fined if you weren't following best practices as determined by some institute, etc.
That's not a very compelling counterexample, when you consider how often countries with governments force other countries with governments to do as they want, often with nothing but economic or soft power.
> Apple is very much welcome to push for signing off of software that appears on their own store.
Just to be clear, apps have to be notarized/signed to run on an Apple device. For macOS, notarized apps aren't required to be distributed through the App Store. Due to sandbox restrictions, some dev tools are distributed independently.
Or there are two versions: a less capable version for the App Store and a more capable version distributed independently.
Software developers being paid well is a result of demand, not because the work is very hard.
The skill and strictness required are only vaguely related to pay; if there are enough people for the job, it won't pay amazingly, regardless of how hard it is.
> Software is so much easier and safer, till very recently anonymous engineering was the norm and people are very annoyed with Apple pushing for signing off the resulting product.
That has nothing to do with engineering quality; it is just to make it harder to go around their ecosystem (and skip paying the store fee), with the additional benefit that a signed package is harder to attack. You can still deliver absolute slop, but the slop will be from you, not from a middleman that captured the delivery process.
I don't understand why the experience you describe would lead you to conclude that LLMs might be useful for software development.
The response "that's why we shouldn't have done it like that!" sounds like a variation on the usual "You're absolutely right! I apologize for any confusion". Why would we want to get stuck in a loop where an AI produces loads of absolute nonsense for us to painstakingly debug and debunk, after which the AI switches track to some different nonsense, which we again have to debug and debunk, and so on? That doesn't sound like a good loop.
> This is sort of why I think software development might be the only real application of LLMs outside of entertainment.
Wow. What about also, I don't know, self-teaching*? In general, you have to be very arrogant to say that you've experienced all the "real" applications.
* - For instance, today and yesterday, I've been using LLMs to teach myself about RLC circuits and "inerters".
I would absolutely not trust an LLM to teach me anything on its own. It has introduced ideas I hadn't heard about, which I then looked up in actual sources to confirm they were valid solutions. Daily usage has shown it will happily lead you down the wrong path, and usually the only way to know that it is the wrong path is if you already knew what the solution should be.
LLMs MAY be a version of office hours or asking the TA, if you only have the book and no actual teacher. I have seen nothing that convinces me they are anything more than the latest version of the hammer in our toolbox. Not every problem is a nail.
> LLMs MAY be a version of office hours or asking the TA
In my experience, most TAs are not great at explaining things to students. They were often the best student in their class, and they can't relate to students who don't grasp things as easily as they do--"this organic chemistry problem set is so easy; I don't know why you're not getting it."
But an LLM has infinite patience and can explain concepts in a variety of ways, in different languages and at different levels. Take bilingual students who speak English just fine but often think and reason in their native language: not a problem for an LLM.
A teacher in an urban school system with 30 students, 20 of whom need customized lesson plans due to neurological divergence, can use LLMs to create those lesson plans.
Sometimes you need things explained to you like you're five years old and sometimes you need things explained to you as an expert.
On deeper topics, LLMs give their references, so a student can and should confirm what the LLM is telling them.
Self-teaching pretty much doesn't work. For many decades now, the barrier has not been access to information; it's been the "self" part. It turns out most people need regimen, accountability, and strictness, which AI just doesn't provide because it's a yes-man.
It's not bogus at all. We've had access to 100,000x more information than we know what to do with for a while now. Right now, you can go online and learn disciplines you've never even heard of before.
So why aren't you a master of, I don't know, reupholstery? Because the barrier isn't information, it's you. You're the bottleneck; we all are, because we're humans.
And AI really just does not help here. It's the same problem as with Professor Google: I can just turn off the computer, and I will. This is how it is for the vast majority of people.
Most people who claim to be self-taught aren't even self-taught. They did a course or multiple courses. Sure, it's not traditional college, but that's not self-taught.
It’s somewhat delusional and potentially dangerous to assume that chatting with an LLM about a specific topic is self-teaching beyond the most surface-level understanding of a topic. No doubt you can learn some true things, but you’ll also learn some blatant falsehoods and a lot of incorrect theory. And you won’t know which is which.
One of the most important factors in actually learning something is humility. Unfortunately, LLM chatbots are designed to discourage this in their users. So many people think they’re experts because they asked a chatbot. They aren’t.
I think everything you said was true 1-2 years ago. But the current LLMs are very good about citing their sources, and hallucinations are exceedingly rare. Gemini, for example, frequently directs you to a website or video that backs up its answer.
> It’s somewhat delusional and potentially dangerous to assume that chatting with an LLM about a specific topic is self-teaching beyond the most surface-level understanding of a topic
It's delusional and very arrogant of you to confidently assert anything without proof: a topic like RLC circuits has a body of rigorous theorems and proofs underlying it*, and nothing stops you from piecing it together using an LLM.
* - See "Positive-Real Functions", "Schwarz-Pick Theorem", "Schur Class". These are things I've been mulling over.
Thinking about this some more, maybe I wasn't considering simulators (aka digital twins), which are supposed to be able to create fairly reliable feedback loops without building things in reality, e.g. will this plane design be able to take off? Still, I feel fortunate I only have to write unit tests to get a bit of contact with reality.
I think of it as: you say "install dishwasher" and the plan looks like all the right steps, but as it builds things out, somehow you end up hiring a maid and buying a drying rack.
I was hoping Bash would go away or get replaced at some point. It's starting to look like it's going to be another 20 years of Bash but with AI doodads.
Nushell scratches the itch for me 95% of the time. I haven't yet convinced anybody else to make the switch, but I'm trying. Haven't yet fixed the most problematic bug for my usage, but I'm trying.
I've never had that great of a memory. The upside is that you can have a bad memory and good note taking skills and be more effective than the 'good memory' people. Really it's just that I forget in a day what other people forget in a week so it's not that big of a gap. But some considerations:
1. Put everything in the issue tracker that you can. This includes notes on what actually happened when you did the work. Include technical details.
2. Try to push everyone else to use the issue tracker. Also makes you sound like the professional in the room.
3. Have a very lightweight note-taking mechanism and use it as much as possible. I am good at vim, so I use the Voom plugin (which just treats markdown headings as an outline, but that's enough to store a ton of notes in a single .md file). Don't try to make these notes good enough to share, as that adds too much overhead.
4. Always take your own notes in a meeting.
5. I will revisit my notes on a project from time to time, and sometimes walk through all of them, but I'm not really treating them like flashcards to memorize. I'm just looking for things that might need some renewed attention. Same with the backlog.
6. In general, I don't try to improve my memory because I don't know what I need to know for a week vs. what I won't look at again for a year. So I focus on being systematic about having good-enough notes on everything and don't really expect to remember anything. (I do remember some things but it's random.)
> Have a very lightweight note taking mechanism and use it as much as possible... Don't try to make these notes good enough to share as that adds too much overhead.
Second this. I use Sublime Text almost exclusively for this purpose. I have one file called daily_notes.md that has everything from meeting notes to formal writing to pasted error messages and code.
Each day gets an h1 but that is the extent of formal organization. I’m actually decently organized (at work, at least) but the simplicity is all about lowering the overhead of jotting stuff down. Keeping everything in one doc makes for very easy search.
Otherwise, I try to write reminders right away with whatever is handy. Mainly: Post-its, slack reminders, and Gmail scheduled sends to myself.
Yes and: My life mgmt project notebook also has a habit tracker section, for all the life maintenance stuff.
Inspired by Seinfeld's "don't break the chain" calendar, but a lot more information dense. It's a big grid, tasks and day of month.
I make a hash mark for every completed task. The boxes are big enough for multiple hashes (eg walking dog 2x daily) and entering values (eg body weight).
And the implication is that the 'quality' of engineers at the companies is actually reversed: the top performers at Dropbox are struggling and leaving, while the underperformers at FANG are struggling and leaving.
Another nuisance is that unencrypted port 80 must be open to the outside world to do the ACME negotiation (LE servers must be able to talk to your ACME client running at the subdomain that wants a cert). They also intentionally don't publish a list of IPs that Let's Encrypt might be coming from [1]. So opening firewall ports on machines that are specifically internal hosts has to be part of any renewal scripts that run every X days. Kinda sucks IMO.
As these are internal hostnames, you're probably doing a DNS-01 challenge rather than HTTP-01. With DNS-01 you don't need to open up any ports for incoming HTTP connections; you just need to place a TXT record in the DNS for the domain.
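With certbot, for example, that flow can be sketched roughly like this (the hook script paths and domain are hypothetical placeholders; the hooks would be whatever adds/removes the `_acme-challenge` TXT record via your DNS provider's API):

```shell
# Sketch of a DNS-01 issuance for an internal host using certbot.
# /usr/local/bin/dns-add-txt and dns-del-txt are hypothetical scripts that
# push/remove the _acme-challenge TXT record at your DNS provider.
certbot certonly \
  --manual --preferred-challenges dns \
  --manual-auth-hook /usr/local/bin/dns-add-txt \
  --manual-cleanup-hook /usr/local/bin/dns-del-txt \
  -d internal.example.com
```

No inbound connection to the internal host is needed; Let's Encrypt only has to look up the TXT record.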
And even with the HTTP challenge you don't have to expose the host directly; you can, e.g., copy the challenge response to a public webserver from the internal host or from a coordinator server.
Fair enough. Although that seems rather complicated for those of us just trying to get a quick cert for an internal host. The LetsEncrypt forums are full of this discussion:
There are lots of simple things that are normally easier to do in the web framework that are suddenly easier to do in the database (with the side effect that you can do DB optimizations much easier).
But the other consideration is that you likely need to do a lot with a reverse proxy like traefik to have much control over what you are really exposing to the outside world. PostgREST is not Spring; it doesn't have explicit control over every little thing, so you're likely to need something in front of it. Anyway, the point is that having a simple Flask server with a few endpoints running wouldn't complicate the architecture very much, because you're better off with something in front of it doing routing already (and SSL termination, etc.).
I'm on a POC project that's using PostgREST and it's been extremely fast to get a big complicated data model working with an API in front of it. But I guess I don't get how to really use this thing in reality? What does devops look like? Do you have sophisticated db migrations with every deploy? Is all the SQL in version control?
I also don't really get where the users get created in postgres that have all the row-level permissions. The docs are all about auth for users that are already in there.
This is my personal experience with using PostgREST (I haven't had the full supabase experience yet):
> What does devops look like?
I usually spin up PostgREST workers in some kind of managed container service, like Google Compute Engine. PostgREST is stateless, so other than upgrades, you never really need to cycle the services. As for resources, PostgREST is extremely lean; I usually try to run 4 to 8 workers per gigabyte of RAM.
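As a rough sketch of what a worker looks like (the connection string, schema, and role names are placeholders; the PGRST_* variables are PostgREST's standard env-var config names in recent versions):

```shell
# Run one stateless PostgREST worker in a container; all state lives in Postgres,
# so scaling out is just running more of these behind the load balancer.
docker run -d --name postgrest -p 3000:3000 \
  -e PGRST_DB_URI="postgres://authenticator:secret@db.internal:5432/app" \
  -e PGRST_DB_SCHEMAS="api" \
  -e PGRST_DB_ANON_ROLE="web_anon" \
  postgrest/postgrest
```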
> Do you have sophisticated db migrations with every deploy?
You can use whatever migration tool you want. Sqitch is quite popular. I've even worked on projects that were migrated by Django while PostgREST did the API service.
> Is all the SQL in version control?
Yes, this is a good approach, but it means needing a migration tool to apply the migrations in the right order; this is what Sqitch does, and many ORM-ish libraries have migrations sort of half-baked in.
It's worth noting that because many of the objects that PostgREST deals with are views, which have no persistent state, the migration of the views can be decoupled from the migration of the persistent objects like tables. Replacing a view (with CREATE OR REPLACE VIEW) can be done very quickly without locking tables as long as you don't change the view's schema.
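A minimal sketch of that kind of view-only migration (the table and view names are made up for illustration; the NOTIFY channel is PostgREST's documented schema-cache reload mechanism):

```shell
psql "$DATABASE_URL" <<'SQL'
-- CREATE OR REPLACE VIEW swaps the definition in place; it won't lock the
-- underlying table, but it fails if the view's column list changes.
CREATE OR REPLACE VIEW api.active_users AS
  SELECT id, email, created_at
  FROM public.users
  WHERE deleted_at IS NULL;
-- Ask PostgREST to reload its schema cache so the new definition is picked up.
NOTIFY pgrst, 'reload schema';
SQL
```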
In Supabase we use a separate Auth server [0]. This stores the user in an `auth` schema, and these users can log in to receive a JWT. Inside the JWT is a "role", which is, in fact, a PostgreSQL role ("authenticated") that has certain grants associated with it, and the user ID (a UUID).
Inside your RLS Policies you can use anything stored inside the JWT. My cofounder made a video [1] on this which is quite concise. Our way of handling this is just an extension of the PostgREST Auth recommendations: https://postgrest.org/en/v9.0/auth.html
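Following those PostgREST auth docs, an RLS policy keyed on the JWT's user ID might be sketched like this (the table and column names are hypothetical; `request.jwt.claims` is how recent PostgREST versions expose the decoded JWT to SQL):

```shell
psql "$DATABASE_URL" <<'SQL'
ALTER TABLE public.todos ENABLE ROW LEVEL SECURITY;
-- Only rows whose owner_id matches the "sub" claim of the caller's JWT
-- are visible to the "authenticated" role.
CREATE POLICY todos_owner ON public.todos FOR SELECT TO authenticated
  USING (owner_id =
         (current_setting('request.jwt.claims', true)::json ->> 'sub')::uuid);
GRANT SELECT ON public.todos TO authenticated;
SQL
```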
[1] https://github.com/anomalyco/opencode/pulls
[2] https://github.com/Aider-AI/aider/pulls