He’s not talking about order of operations; he’s talking about floating-point error, which will accumulate in different ways in each case, because floating point is an imperfect representation of real numbers.
Yeap, the specific example wasn't important. I chose an example involving the order of operations and an integer overflow simply because it would be easy to discuss. (I have been out of the field for nearly 20 years now.) Your example of floating-point errors is another. I also encountered artifacts from approximations for transcendental functions.
Choosing a "better" language was not always an option, at least at the time. I was working with grad students who were managing huge datasets, sometimes for large simulations and sometimes from large surveys. They were using C. Some of the faculty may have used Fortran. C exposes you to the vulgarities of the hardware, and I'm fairly certain Fortran does as well. They weren't going to use a calculator for those tasks, nor an interpreted language. Even if they wanted to choose another language, the choice of languages was limited by the machines they used. I've long since forgotten what the high-performance cluster was running, but it wasn't Linux and it wasn't on Intel. They may have been able to license something like Mathematica for it, but that wasn't the type of computation they were doing.
I didn't consider it an order of operations issue. Order of operations doesn't matter in the above example unless you have bad precision. What I was trying to say is that good calculators have plenty of precision.
But floating-point error manifests in different ways. Most people only care about 2 to 4 decimals, which even the cheapest calculators can handle well across a good number of consecutive ordinary computations. Anyone who cares about better precision will choose a better calculator. So floating-point error is remediable.
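To make the order-dependence concrete, here is a minimal sketch (Python doubles rather than C, and the values are purely illustrative): adding the same three numbers in two different groupings gives two different answers, because every intermediate result gets rounded.

    # Floating-point addition is not associative: the rounding error
    # depends on the order in which the operations are performed.
    a, b, c = 1e16, -1e16, 1.0

    left  = (a + b) + c   # exact cancellation first, then + 1.0   -> 1.0
    right = a + (b + c)   # the 1.0 is swallowed by the huge -1e16 -> 0.0

    print(left, right)    # prints: 1.0 0.0

With more precision (e.g. a wider type) both groupings come out as 1.0, which is essentially the "good calculators have plenty of precision" point above.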
I think this is why the LLM era will not produce as much automation as people think.
We have had the ability to automate browser activities for a long time, but online service providers don’t want to be behind a layer of automation, which is why captchas were invented.
The obstacle to automating things on the Internet has never been technological; it has been social.
I don’t see how anything has changed!
In fact I recently received an updated ToS from eBay saying I am not allowed to use an AI agent to buy stuff on their site. Just a matter of time until others follow suit!
Edit: I misunderstood what was happening. Thanks to the comment below for clarifying.
While I agree with you, that's not what this announcement is about. Anthropic wants to disallow programmatic use of their subscription plans for business reasons, as a way to manage demand. They've been having outages at least weekly since March.
One of my favorite fun facts is that it’s nearly impossible to get a hamster drunk: their foraging method is to gather, e.g., grains and fruits and store them piled up underground in their burrow, where they of course ferment, so hamsters’ livers have become unreasonably good at metabolizing alcohol.
I would say that’s the teenage phase, infected by nihilism. The adult moves past that, finds that life is an acquired taste, and finds joy in the everyday.
No, you don’t understand: they can’t accomplish the same thing with an informal policy.
Both Google and Amazon are government contractors. With the designation, they might have had to divest their positions in Anthropic and be unable to serve their models.
> I'm not sure that's how the supply chain risk thing works.

AFAIK, it has to be part of the supply chain for the products delivered to the DoD to count. I don't think Amazon's otherwise unrelated involvement with Anthropic forces them to sever that relationship. I'm not sure if Hegseth thinks otherwise, but it's entirely possible that he is wrong or that being wrong is expedient to his threats.
Domain knowledge as in non-public aspects of the work you / your workplace does. The AI tools are very good at whatever is public but very clueless about proprietary domains. Let's say you make CRUD apps for some confidential domain. Now the CRUD skills might be a commodity, but the confidential domain knowledge is even more important.
As long as there's internal documentation, which virtually every serious shop has, it can be processed and combined with AI. There are startups selling this product already. I've seen firsthand some very narrowly focused domain knowledge becoming more accessible when you can ask the chatbot and the thing is right. It works.
Come to think of it, domain knowledge should be an LLM's strong suit as long as you can provide the right documentation, which is working pretty well already.
Right now the main issue I see with AI is that it doesn't do well with scaling. It's great for building demos and examples but you have to fix its code for real production work. But for how long?
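To make "processed and combined with AI" a bit more concrete, here is a deliberately naive sketch of the retrieval half (keyword overlap standing in for embeddings; the document strings and function names are made up for illustration, not any particular vendor's product):

    import re

    # Toy retrieval step: pick the internal docs that best match a question
    # and assemble them into the prompt a chatbot would be given.
    # (Illustrative only; real products use embeddings, chunking, permissions, etc.)

    def tokens(text: str) -> set[str]:
        return set(re.findall(r"[a-z0-9]+", text.lower()))

    def score(question: str, doc: str) -> int:
        """How many words the question and the doc have in common."""
        return len(tokens(question) & tokens(doc))

    def build_prompt(question: str, docs: list[str], top_k: int = 1) -> str:
        best = sorted(docs, key=lambda d: score(question, d), reverse=True)[:top_k]
        context = "\n---\n".join(best)
        return f"Answer using only this internal documentation:\n{context}\n\nQuestion: {question}"

    internal_docs = [
        "Invoices over 10k EUR are approved in ERP module FI-23 by the regional controller.",
        "Holiday requests go through the HR portal and need two sign-offs.",
    ]
    print(build_prompt("Who approves invoices?", internal_docs))

The interesting part is never the retrieval code; it's whether the internal documentation actually exists and is correct, which is exactly the objection about ERP systems raised below.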
In ERP software there are MLOCs (millions of lines of code) without any technical documentation, and nobody would spend a dime to create any. So the deep expert knowledge of how business processes are supposed to work (in full detail) and how they are implemented is mostly in the heads of a couple of people.
AI is most excellent at reading and understanding large codebases and, with some guidance docs, can easily reproduce accurate technical documentation. Divide and conquer.
Reading a large codebase... perhaps, if it is not too large. Understanding... why a tool exists, what the motivation for its design is, what the external human-systems requirements are for successful use of the internal-facing tools... especially when that knowledge exists only in the memories of a few developers and PMs... not so much.
Deep domain expertise is a long way from being effectively replaced by AI.
Again, nobody would spend a dime to create the technical documentation, even if it could be done somewhat faster with AI support. Also, in my experience AI is not so great at explaining the consequences for business processes when documenting code.
Accuracy/faithfulness to the code as written isn't necessarily what you care about though, it's an understanding of the underlying problem. Just translating code doesn't actually help you do that.
No, current LLMs are already good enough to read the subtexts from documents, email, call transcripts where available. They're extremely good at identifying unwritten business practices, relationships, data flows, etc.
But everyone at the company has that private domain knowledge. The only thing you're bringing to the table that anyone in any other role doesn't offer is the commoditized skill set.
Right, and you won't keep everything out of materials like AI-generated meeting notes for every repeat of every process, so the company doesn't really need many experts in its existing operations.
Pre-LLMs, algorithmic knowledge was used as a proxy for skill difference at the interview stage. In the workplace, you could google the implementation details and common gotchas. This was valuable knowledge.
Post-LLMs, the value of this (as a differentiator) has dropped to zero. Domain knowledge (also known as business knowledge) is the obvious area to skill up on. It simply means knowledge about the area your organisation is working in, whether it is yogurt delivery logistics, clothing manufacturing supply chain systems, etc. That's the real differentiator now. Anyone can invert a binary tree in 5 minutes using an LLM. But designing a software system while knowing your organisation's domain well is invaluable.
Right: bridging the knowledge gap by getting closer to the clerical workers of the company, because pure software knowledge is no longer as valuable. That will probably make your salary closer to theirs, and that'll be a pretty big adjustment.
Can't speak to the OP, but lots of technical work (and frankly many trades are also technical) doesn't lend itself to text-based documentation and teaching. Software, translation, and non/fiction writing (like marketing and sales) all do. I think LLMs will take a significant part of those businesses, because I don't believe there is a Jevons Paradox for code; agents are the new tractors.
At the same time, medicine, hardware design, good industrial design, and specific domain knowledge (problems you solve in assembly or control loops) that are fundamentally proprietary and aren't well documented will continue to have value even when LLMs make solving the problems around them easier. Those might have increased leverage, at least for this round of LLMs. Now, maybe they succeed with world models, but that is not today.
Really, I don't know what "kids these days" are going to do. I couldn't have predicted the influencer boom 15 years ago, but I also think there are geopolitical risks that are probably bigger than that shift, and "synergized" with the push to AI Everything, it doesn't look like a good time to be a learning/working human.
They were late to the game but are definitely investing more now.
They have three full EVs, in rough order of size: the C-HR, the bZ (previously called the bZ4X), and the bZ Woodland (basically a long station-wagon version of the bZ).
Subaru is also selling a tweaked and rebadged version of each. I believe these are all made in Subaru factories with Toyota power-train components.