Hacker News | baggachipz's comments

The carousel of ~progress~ financing continues to turn....

Given that only one of the dogs can talk, you're set to get only one answer. Though I suspect that the ability to talk bestows bodily shame, based on this anecdotal evidence.

The levels of pettiness in this administration know no bounds. I'm sure they'll forbid the use of "woke", and require all government employees to say "I terminated sleep this morning".

> The levels of pettiness in this administration know no bounds

https://www.theatlantic.com/ideas/archive/2018/10/the-cruelt...


What an odd take. Every administration does this sort of petty stuff. Nothing new under the sun.

This is demonstrably false. Previous administrations have not. It used to be normal to do things like keeping cabinet members appointed by your opponents or not putting up a mocking picture of your predecessor in the White House.

> It used to be normal to do things like keeping cabinet members appointed by their opponents

This particular thing was not all that common between Presidents who succeeded one another normally by election. I think the most recent was Robert Gates serving as SecDef across the Bush II/Obama transition; before that there were five kept across the Reagan/Bush I transition, and no more in the post-WWII period.

(It’s true that the pettiness level in this Administration is unprecedented, but this is not a valid example.)


True, I didn’t mean it was routine but it was somewhat normal. I just wanted to show the incredible range of professional behaviour that has disappeared.

Petty as in 'small and does not really matter' or petty as in 'vindictive'. All administrations do many small things that may not ultimately have much impact, but often those may be for benign reasons. Understanding the reasoning behind the decisions would help in determining what kind of 'petty' this is.

Absolutely vindictive. He goes out of his way to cite "DEI" in his comments.

Both.

It's so utterly juvenile and unprofessional. The kind of thing a petulant twelve year-old does for attention.


I will never understand Europe's obsession with using WhatsApp. They've had a long time to switch off it for a better and less evil product.

It's not an obsession, it's network effects. I say this as someone who mostly uses Telegram and Signal and has asked friends to text me on those apps instead of WhatsApp. But most people don't want to have several apps and to have to choose which is the correct one to contact each friend. So the status quo seems to be WhatsApp for people for whom they only have a phone number, and Instagram for the rest.

What's so hard about not using Meta products? I manage to not use them every single day. There are dozens of us!

That's not what ericmay said.

I don't use Meta products, and haven't for many years. But I still have a Facebook account, because a) deleting it would be a fairly rigorous process, and b) as long as I maintain the account, I have some control over the information about me that Meta maintains; if I deleted the account, they would maintain a "shadow profile" for me that I had no control over, and (for instance) any photos tagged as containing me, I would not be able to go in and untag.


Source: "trust us bro"

That often wrong and unnecessary AI bullshit ain't gonna pay for itself!

Good thing the industry has moved on to another promising niche technology which is being hyped and ruined by greed.

One need only look at 1929 to understand what's in store. Of course, the rich/powerful will say "who could have seen this coming?"

That S1 is gonna make for a fun read. It'll make Adam Neumann blush.

Because of unprofitability? ARR and growth are very high, and margins are either good or can soon become good.

Is the claim that coding agents can't be profitable?


> margins are either good or can soon become good.

Their margins are negative and every increase in usage results in more cost. They have a whole leaderboard of people who pay $20 a month and then use $60,000 of compute.

https://www.viberank.app


That site seems to date from the days before there were real usage limits on Claude Code. Note that none of the submissions are recent. As such, I think it's basically irrelevant - the general observation is that Claude Code will rate limit you long, long before you can pull off the usage depicted so it's unlikely you can be massively net-profit-negative on Claude Code.

Do you mind giving a bit more details in layman's terms about this assuming the $60k per subscriber isn't hyperbole? Is that the total cost of the latest training run amortized per existing subscriber plus the inference cost to serve that one subscriber?

If you tell me to click the link, I did, but backed out because I thought you'd actually be willing to break it down here instead. I could also ask Claude about it I guess.


It counted up the tokens that users on “unlimited” Max/Pro plans consumed through CC, and calculated what it would cost to buy that number of tokens through the API.

$60K in a month was unusual (and possibly exaggerated); amounts in the $Ks were not. For which people would pay $200 on their Max plan.

Since that bonanza period Anthropic seem to have reined things in, largely through (obnoxiously tight) weekly consumption limits for their subscription plans.

It’s a strange feeling to be talking about this as if it were ancient history, when it was only a few months ago… strange times.
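A back-of-envelope sketch of the math being described, with purely hypothetical token counts and per-token prices (not Anthropic's actual rates or any real user's usage): it just compares what a month of tokens would cost at API rates against a flat subscription.

```python
# Hypothetical figures for illustration only; real prices and usage vary.
API_PRICE_PER_M_INPUT = 3.00    # assumed $ per million input tokens
API_PRICE_PER_M_OUTPUT = 15.00  # assumed $ per million output tokens

def api_equivalent_cost(input_tokens_m: float, output_tokens_m: float) -> float:
    """Cost of a month's token consumption if bought at per-token API rates."""
    return (input_tokens_m * API_PRICE_PER_M_INPUT
            + output_tokens_m * API_PRICE_PER_M_OUTPUT)

subscription = 200.0  # flat monthly plan price, $

# A heavy user pushing, say, 500M input and 100M output tokens in a month:
cost = api_equivalent_cost(500, 100)  # 500*3 + 100*15 = 3000.0
print(f"API-equivalent cost: ${cost:,.0f} vs. plan revenue: ${subscription:,.0f}")
```

Even with these made-up numbers, the gap between thousands in API-equivalent consumption and a $200 plan shows why heavy "unlimited" users were so costly to serve.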


So they're now putting in aggressive caps, and the other two paths they have to close the gap are driving the cost of those tokens way down and/or getting users to pay many multiples of their current subscription. It's not odd for any business to expect its costs to decrease substantially and its pricing power to increase, but even if the gap is "only" low thousands versus $200, that's... significant. Thanks for the insight.

> margins are either good or can soon become good

This is always the pitch for money-losing IPOs. Occasionally, it is true.


let's see them then

Dario Amodei gives off strong Adam Neumann vibes. He claimed "AI will replace 90% of developers within 6 months" about a year ago...

It was "writing 90% of the code", which seems to be pretty accurate, if not conservative, for those keeping up with the latest tools.

> which seems to be pretty accurate

It's not, even by his own citing: https://www.youtube.com/watch?v=iWs71LtxpTE

He said that this applies to "many teams" rather than "uniformly across the whole company".


Yes, those using the tools use the tools, but I don't really see those developers absolutely outpacing the rest of developers who do it the old fashioned way still.

I think you're definitely right, for the moment. I've been forcing myself to use/learn the tools almost exclusively for the past 3-4 months and I was definitely not seeing any big wins early on, but improvement (of my skills and the tools) has been steady and positive, and right now I'd say I'm ahead of where I was the old-fashioned way, but on an uneven basis. Some things I'm probably still behind on, others I'm way ahead. My workflow is also evolving and my output is of higher quality (especially tests/docs). A year from now I'll be shocked if doing nearly anything without some kind of augmented tooling doesn't feel tremendously slow and/or low-quality.

it’s wild that engineers need months or years to properly learn programming languages but dismiss AI tooling after one bad interaction

I think inertia and determinism play roles here. If you invest months in learning an established programming language, it's not likely to change much during that time, nor in the months (and years) that follow. Your hard-earned knowledge is durable and easy to keep up to date.

In the AI coding and tooling space everything seems to be constantly changing: which models, what workflows, what tools are in favor are all in flux. My hesitancy to dive in and regularly include AI tooling in my own programming workflow is largely about that. I'd rather wait until the dust has settled some.


totally fair. I do think a lot of the learnings remain relevant (stuff I learned back in April is still roughly what I do now), and I am increasingly seeing people share the same learnings; tips & tricks that work and whatnot (i.e. I think we’re getting to the dust settling about now? maybe a few more months? definitely uneven distribution)

also FWIW I think healthy skepticism is great; but developers outright denying this technology will be useful going forward are in for a rude awakening IMO


Motivated reasoning combined with incomplete truths is the perfect recipe for this.

I kind of get it, especially if you are stuck on some shitty enterprise AI offering from 2024.

But overall it’s rather silly and immature.


That's not even close. The keyboard is writing 100% of my code. The keyboard is not replacing me anytime soon.

If you added up all the code written globally on Dec 3 2025, how much do you think was written by AI and how much was clacked out on a keyboard?

And 12 months later Anthropic is listing 200 open positions for humans: https://www.anthropic.com/jobs

Of course they are. The two things aren’t contradictory at all, in fact one strongly implies the other. If AI is writing 90% of your code, that means the total contribution of a developer is 10× the code they would write without AI. This means you get way more value per developer, so why wouldn’t you keep hiring developers?

This idea that “AI writes 90% of our code” means you don’t need developers seems to spring from a belief that there is a fixed amount of software to produce, so if AI is doing 90% of it then you only need 10% of the developers. So far, the world’s appetite for software is insatiable and every time we get more productive, we use the same amount of effort to build more software than before.

The point at which Anthropic will stop hiring developers is when AI meets or exceeds the capabilities of the best human developers. Then they can just buy more servers instead of hiring developers. But nobody is claiming AI is capable of that so far, so of course they are going to capitalise on their productivity gains by hiring more developers.


If AI is making developers (inside Anthropic or out) 10x more productive... where's all the software?

I'm not an LLM luddite, they are useful tools, but people with vested interests make a lot of claims that if they were true would result in a situation where we should already be seeing the signs of a giant software renaissance... and I just haven't seen that. Like, at all.

I see a lot more blogging and influencer peddling about how AI is going to change everything than I do any actual signs of AI changing much of anything.


How much software do you think happened at Google internally during its first 10 years of existence that never saw outside light? I imagine that they have a lot of internal projects that we have no idea they even need.

But this AI boom is supposedly lifting all boats, internal and external.

That's the hype being sold. So where's the software...?

And again, I'm not anti-LLM. But I still think the hype around them is far, far greater than their real impact.



Here's the claim again for you:

> AI will replace 90% of developers within 6 months


You said:

> The two things aren’t contradictory at all, in fact one strongly implies the other. If AI is writing 90% of your code, that means the total contribution of a developer is 10× the code they would write without AI. This means you get way more value per developer, so why wouldn’t you keep hiring developers?

Let's review the original claim:

> AI will replace 90% of developers within 6 months

Notice that the original claim does not say "developers will remain the same amount, they will just be 10x more effective". It says the opposite of what you claim it says. The word "replace" very clearly implies loss of job.


> Let's review the original claim:

> > AI will replace 90% of developers within 6 months

That’s not the original claim though; that’s a misrepresentative paraphrase of the original claim, which was that AI will be writing 90% of the code with a developer driving it.


Huh. You seem to be right. It seems I was responding to a comment which misquoted Dario.

that’s not what he claimed, just to be clear. I’m too lazy to look up the full quote but not lazy enough to not comment this is A) out of context B) mis-phrased as to entirely misconstrue the already taken-out-of-context quote

I think it was also back in March, not a year ago


https://www.businessinsider.com/anthropic-ceo-ai-90-percent-... (March 2025):

>"I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code," Amodei said at a Council of Foreign Relations event on Monday.

>Amodei said software developers would still have a role to play in the near term. This is because humans will have to feed the AI models with design features and conditions, he said.

>"But on the other hand, I think that eventually all those little islands will get picked off by AI systems. And then, we will eventually reach the point where the AIs can do everything that humans can. And I think that will happen in every industry," Amodei said.

I think it's a silly and poorly defined claim.


you’re once again cutting the quote short — after “all of the code” he has more to say that’s very important for understanding the context and avoiding this rage-bait BS we all love to engage in

edit: sorry you mostly included it paraphrased; it does a disservice (I understand it’s largely the media’s fault) to cut that full quote short though. I’m trying to specifically address someone claiming this person said 90% of developers would be replaced in a year over a year ago, which is beyond misleading

edit to put the full quote higher:

> "and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced"


can you post the full quote then? He has posted what the rest of us read

I believe:

> "and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced"

from https://www.youtube.com/live/esCSpbDPJik?si=kYt9oSD5bZxNE-Mn

(sorry have been responding quickly on my phone between things; misquotes like this annoy the fuck out of me)




uh it proves the original comment I responded to is extremely misleading (which is my only point here); CEO did not say 90% of developers would be replaced, at all

Is this the new 'next year is the year of the Linux desktop'?

that wework s1 was gold

Elevating the world's consciousness! https://www.wework.com/newsroom/wecompany

It was so bad a lot of folks thought it was fake when first released! People couldn’t believe WeWork was actually that clueless about how such a thing would land.

SoftBank is just waiting to invest in this …
