I don't think the analogy holds up at all. A doctor usually has a very small time window to deal with your problem and then switches to the next patient.
If I'm working on your project I'm usually dedicated to it 8 hours a day for months.
I do agree this is not new. I had clients with some development experience come up with off-the-cuff suggestions that just waste everyone's time and are really disrespectful (how bad at my job do you think I am if you think I didn't already try the obvious approach you came up with while listening to me describe the problem?). But AI is going to make this much worse.
I'd take the other side for most of these. The Nvidia one is too vague (some could argue it's already seeing "heavy competition" from Google and other players in the space), but here's something more concrete: I doubt they will fall below 50% market share.
Build cache, packages, and a few other things get messed up when switching branches - if you need to do a quick bug-fix and get back into the main thing, worktrees are really nice.
I see. The projects I have been working on in the last few years don't take very long to compile, so build cache has not been a very big factor for me. I could see it being more important for projects that take a long time to build though.
In git you can have only one worktree per branch. For example, if you have a worktree on main you cannot have another one on main.
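That restriction is easy to reproduce in a throwaway repo (a sketch; all paths and names here are just illustrative):

```shell
set -e
# Throwaway repo demonstrating the one-worktree-per-branch rule.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main repo && cd repo
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m init
# main is checked out here in the primary worktree, so a second
# worktree on the same branch is refused:
if ! git worktree add ../wt-main-dup main 2>/dev/null; then
  echo "refused: main is already checked out"
fi
```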
I personally find this annoying. I usually like to keep one pristine and always current working copy of main (and develop if applicable) around for search and other analysis tasks[1]. Worktrees would be ideal and efficient but due to the mentioned restriction I have to either waste space for a separate clone or do some ugly workarounds to keep the worktree on the branch while not keeping it on the branch.
jujutsu workspaces are much nicer in that regard.
[1] I know there are tons of ways to search and analyze in git, but over the years I found a pristine working copy to be the most versatile solution.
You probably know this, but for others that don't: local git clones will share storage space with hardlinks for the objects in .git. The wasted space wouldn't be a doubling, it would be the work tree twice plus the (small) non-hardlinked bits under .git. No idea how LFS interacts with this, but it can be worth knowing about this mechanism.
Also, if you end up relying on it for space reasons, worth knowing that cloning from a file:// url switches the hardlink mechanism off so you end up with a full duplicate again.
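A quick way to see the difference (throwaway repo; paths are illustrative): cloning by plain path hardlinks the objects, while cloning via a file:// URL goes through the transport and copies them.

```shell
set -e
# Throwaway repo; all paths here are hypothetical.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main src
echo hello > src/f.txt
git -C src add f.txt
git -C src -c user.name=t -c user.email=t@t commit -q -m init
# Plain-path clone: objects under .git/objects are hardlinked.
git clone -q src by-path
# file:// clone: full copy, no hardlinks.
git clone -q "file://$tmp/src" by-url
# Hardlinked objects have a link count > 1:
find by-path/.git/objects -type f -links +1 | head -3
```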
> In git you can have only one worktree per branch. For example, if you have a worktree on main you cannot have another one on main.
You can detach the worktree from the repo and check out multiple branches at the same time in different locations. Not sure if this also allows checking out the same branch to multiple locations at once. You can also use a shallow clone, so you don't have to waste space on the full repo history. In the end you still have to spend space for each worktree, but that isn't something jujutsu can avoid either, or can it?
This restriction of git worktrees is annoying but I just learned one simple rule to follow:
Never check out the main development branch (main/master/develop/etc.) in other worktrees (i.e. outside the "main worktree", in git-worktree nomenclature). Use another branch name with a "wt-" prefix for it. Like in:
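A minimal sketch of that rule in a throwaway repo (the "wt-" prefix and all paths here are just illustrative):

```shell
set -e
# Throwaway demo repo; names and paths are hypothetical.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main repo && cd repo
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m init
# Instead of checking out main itself in the second worktree, park a
# scratch branch named wt-main on the same commit:
git worktree add -q ../repo-wt -b wt-main main
# main stays unclaimed, so checking it out in the main worktree
# never fails with "already checked out":
git checkout -q main
```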
And to be honest, after being disciplined about always doing that, I very rarely get the error message saying that the branch I want to check out is already checked out in another worktree. Before that, I regularly ran into this situation: I'd check out the main branch in a second worktree to see the current state of the repo (because my main worktree had work-in-progress stuff), and some time later, when I finished work on the main branch and tried to check it out in my main worktree, I'd get the error, because I had totally forgotten that I checked it out in the other worktree.
That sounds like a nice improvement, just like many other aspects of jj!
Tools should adapt to us and not the other way around, but if you are stuck with git, there's a slightly different workflow that supports your use case: detached HEAD. Whenever I check out branches that I don't intend to commit to directly, I check out e.g. origin/main. This can be checked out in many worktrees. I actually find it more ergonomic and did this before using worktrees: there are no extra steps to keep a local main pointer up to date.
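A sketch of that workflow in a throwaway repo pair (all names hypothetical): checking out origin/main gives a detached HEAD, which claims no branch, so any number of worktrees can sit on it at once.

```shell
set -e
# Throwaway "remote" plus a clone; paths are illustrative.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main remote-repo
git -C remote-repo -c user.name=t -c user.email=t@t commit -q --allow-empty -m init
git clone -q remote-repo work && cd work
# Detached checkouts claim no branch, so the same ref can back
# several worktrees simultaneously:
git worktree add -q --detach ../ro-1 origin/main
git worktree add -q --detach ../ro-2 origin/main
```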
The detached head is what I meant with keeping it on the branch while not keeping it on the branch.
The complication comes from trying to stay current. With a regular worktree I could just pull, but now I have to remember the branch, fetch all and reset hard to the remembered branch.
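The refresh dance looks roughly like this (throwaway repos; names hypothetical): no plain `git pull`, just a fetch and a re-detach onto the remote ref.

```shell
set -e
# "remote-repo" plays the remote; "work" holds a detached worktree.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main remote-repo
git -C remote-repo -c user.name=t -c user.email=t@t commit -q --allow-empty -m one
git clone -q remote-repo work
git -C work worktree add -q --detach ../ro origin/main
# The remote moves on while the detached worktree falls behind...
git -C remote-repo -c user.name=t -c user.email=t@t commit -q --allow-empty -m two
# ...so catching up is a fetch plus a re-detach, not a pull:
git -C ro fetch -q origin
git -C ro checkout -q --detach origin/main
```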
> In git you can have only one worktree per branch.
Well, that is true, but a branch is nothing more than an auto-moving label, so I don't see how that is limiting at all. You can have as many branches as you like, and you can also just check out the commit.
This is kind of unfortunate in this case, as it breaks some tooling: the extra trees are not colocated with git, so things like editor inline history/blame, or agents that know to look in git history to fix their mistakes, stop working.
I think the biggest benefits of colocation are, in rough approximation of the order I encounter them:
1) Various read-only editor features, like diff gutters, work as they usually do. Our editor support still just isn't there yet, I'm afraid.
2) Various automation that tends to rely on things like running `git` -- again, often read-only -- still works. That means you don't have to go and do a bunch of bullshit, or write a patch your coworker has to review, just to make your ./run-all-tests.sh scripts work locally, or whatever.
3) Sometimes you can do some kind of nice things like run `git fetch $SOME_URL` and then `git checkout FETCH_HEAD` and it works and jj handles it fine. But I find this rare; I sometimes use this to checkout GitHub PRs locally though. This could be replaced 99% for me by having a `jj github` command or something.
The last one is very marginal, admittedly. Claude I haven't had a problem with; it tends to use jj quite easily with a little instruction.
To be technical, it's more that it can read and write the on-disk Git format directly, like many other tools can.
I think the easiest way to conceptualize it is to think of Git and jj as being broken down into three broad "layers": data storage, algorithms, user interface. Jujutsu uses the same data storage format as Git -- but each of them have their own algorithms and user interface built atop that storage.
You can still use git worktrees in a colocated repository. jj workspaces are a different, but similar capability that provide some extra features, at the cost of some others.
> there is no difference as jj is only a frontend to git.
It's easy to get this impression because git is so dominant, so most people using jj use it as a frontend to git.
But jj is a fully self-contained version control system. It always stores all its commits, branches, and other repo metadata in the .jj directory. You can use it standalone like this without ever using git.
Git integration is optional, and works by importing from or exporting to Git. Internally, jj still manages its own history. Git support is just a bridge.
jj may use git as (one of) its backing stores, and its colocation offers some compatibility at the cost of important tradeoffs, but it isn't intended to be a git frontend.
I recently bought their cloud fiber gateway and two in-wall WiFi 7 access points because I'm setting up a network in my new apartment and heard this multiple times.
Honestly, they are nothing like Apple - just look at their mobile apps. How many do they have, 10? All to interact with the same gateway, just for slightly different use-cases. Not to mention that the functionalities are hard to decipher.
Apple is going to be even more profitable in the consumer space because of RAM prices? I feel like they are the only player with the supply chain locked down enough to not get caught off guard: good prices locked in far enough in advance, and suppliers not willing to antagonize such a big customer by backing out of a deal.
They used to, but they've caught up. The flagship iPhone 17 has 12GB RAM, the same as the Galaxy S25. Only the most expensive Z Fold has more, with 16GB.
RAM pricing segmentation makes Apple a lot of money, but I think they scared themselves when AI took off and they had millions of 4GB and 8GB products out in the world. The Mac minimum RAM specs have gone up too, they're trying to get out of the hole they dug.
People always make this argument. But could you please expand on what you think is actually in memory?
Code vs. data: by and large, I bet the content held in RAM takes up the majority of the space. And people have complained about the lack of RAM in iPhones for ages now, particularly with how it affects browsers.
Tim Cook is the Supply Chain Guy. He has been for decades, before he ever worked at Apple. He does everything he can to make sure that Apple directly controls as much of the supply chain as possible, and uses the full extent of their influence to get favorable long-term deals on what they don't make themselves.
In the past this has resulted in stuff like Samsung Display sending their best displays to Apple instead of Samsung Mobile.
There are ecosystems that have package managers but also well-developed first-party packages.
In .NET you can cover a lot of use cases simply using Microsoft libraries, and even a lot of OSS that isn't directly part of the Microsoft org is maintained by Microsoft employees.
The 2020 State of the Octoverse security report showed that the .NET ecosystem has, on average, the lowest number of transitive dependencies. A big part of that is the breadth and depth of the BCL, standard libraries, and first-party libraries.
The .NET ecosystem has been moving towards a higher number of dependencies since the introduction of .NET Core. Though many of them are still maintained by Microsoft.
The "SDK project model" did a lot to reduce that back down. They broke the BCL up into a lot of smaller packages to make .NET 4.x maintenance/compatibility easier, and if you are still supporting .NET 4.x (and/or .NET Standard) for whatever reason, your dependency list (especially transitive dependencies) is huge. But if you are targeting .NET 5+ only, that list shrinks back down and the BCL no longer shows up in your dependency lists.
Even some of the Microsoft.* namespaces have properly moved into the BCL SDKs and no longer show up in dependency lists, even though Microsoft.* namespaces originally meant non-BCL first-party.
I think first-party Microsoft packages ought to be a separate category that is more like the BCL in terms of risk. The main reason they split them out is so that they can be versioned separately from .NET proper.
This has been true since Claude Sonnet 3.5, so for over a year now. I was early on the LLM train, building RAG tools and prototypes at the company I was working for at the time, but pre-3.5 all the models were just a complete waste of time for coding, except the inline autocomplete models that saved you some typing.
Claude 3.5 was actually where it could generate simple stuff. Progress has kind of tapered off since though. Claude is still the best, but Sonnet 4.5 is disappointing in that it doesn't fundamentally bring me more than 3.5 did, it's just a bit better at execution - I still can't delegate higher-level problems to it.
Top tier models are sometimes surprisingly good but they take forever.
Not really - 3.5 was the first model where I could actually use it to vibe through CRUD without it wasting more time than it saves; I actually used it to deliver an MVP on a side gig I was working on. GPT-4 was nowhere near as useful at the time, and Sonnet 3 was also considerably worse.
And from reading through the forums and talking to co-workers this was a common experience.
Up until Claude 4.5 I had very poor experiences with C/C++. Sure, bash and Python worked OK, albeit with the occasional hallucination. ChatGPT-5 did OK with C/C++ and fairly well with Python (again having issues with omitting code during iterations, requiring me to yell at it a lot). Claude 4.5 just works, and it's crazy good.
You can't build large ML models without swaths of data, and GDPR is the antithesis of collecting data. Therefore countries/companies that don't have to abide by it are at an obvious advantage.
If anything, this is coming from the political elite being convinced that AI research is a critical topic, and the EU recognizing it's weak because of its self-imposed handicaps and trying to move past that. I'd be shocked if we manage to do anything concrete on the matter, TBH.