I always thought of Go as low level and Rust as high level. Go has a lot of verbosity as a "better C" with GC. Rust has low level control but many functional inspired abstractions. Just try writing iteration or error handling in either one to see.
I wonder if it's useful to think of this as Go being low type-system-complexity and Rust being high type-system-complexity, where type system complexity entails a tradeoff between the complexity of the language and how powerful the language is in allowing you to define abstractions.
This would be an independent axis from close-to/far-from the underlying machine (whether virtual like wasm or real like a System V x86_64 ABI), which describes how closely the language lets you interact with the environment it runs in, versus how much it abstracts that environment away in order to provide abstractions.
Rust lives in high type system complexity and close to the underlying machine environment. Go is low type system complexity and (relative to rust) far from the underlying machine.
> Where type system complexity entails a tradeoff between the complexity of the language and how powerful the language is in allowing you to define abstractions.
I don't think that's right. The level of abstraction is the number of implementations that are accepted for a particular interface (which includes not only the contract of the interface expressed in the type system, but also informally in the documentation). E.g. "round" is a higher abstraction than "red and round" because the set of round things is larger than the set of red and round things. It is often untyped languages that offer the highest level of abstraction, while a sophisticated type system narrows abstraction (it reduces the number of accepted implementations of an interface). That's not to say that higher abstraction is always better - although it does have practical consequences, explained in the next paragraph - but the word "abstraction" does mean something specific, certainly more specific than "describing things".
The level of abstraction is felt in how many changes to client code (the user of an interface) are required when making a change to the implementation. Languages that are "closer to the underlying machine" - especially as far as memory management goes - generally have lower abstraction than languages that are less explicit about memory management. A local change to how a subroutine manages memory typically requires more changes to the client - i.e. the language offers a lower abstraction - in a language that's "closer to the metal", whether the language has a rich type system like Rust or a simpler type system like C, than in a language that is farther away.
The way I understood the bit you quoted was not as a claim that more complex type system = higher abstraction level, but as a claim that a more complex type system = more options for defining/encoding interface contracts using that language. I took their comment as suggesting an alternative to the typical higher/lower-level comparison, not as an elaboration.
As a more concrete example, the way I interpreted GP's comment is that a language that is unable to natively express/encode a tagged union/sum type/etc. in its type system would fall on the "less complex/less power to define abstractions" side of the proposed spectrum, whereas a language that is capable of such a thing would fall on the other side.
> which includes not only the contract of the interface expressed in the type system, but also informally in the documentation
I also feel like including informal documentation here kind of defeats the purpose of the axis GP proposes? If the desire is to compare languages based on what they can express, then allowing informal documentation to be included in the comparison renders all languages equally expressive since anything that can't be expressed in the language proper can simply be outsourced to prose.
But that's why the word "abstraction" is the wrong choice. The ability of a language to express detail and the ability of a language to have high abstractions are two different things, and when we talk about high and low level languages, I claim that what we intuitively mean is abstraction, not the expressivity of contracts. For example, ATS's contracts are virtually unlimited in their expressivity (it makes Rust indistinguishable from C by comparison), yet few would say it's particularly high-level. On the other hand, Scheme or even JavaScript can express few contracts, and yet are considered high level. I think that when we think of a high-level language, what we have in mind is a language where programs typically need to concern themselves with fewer details. This corresponds more with abstraction rather than "contract expressiveness".
> The ability of a language to express detail and the ability of a language to have high abstractions are two different things, and when we talk about high and low level languages, I claim that what we intuitively mean is abstraction, not the expressivity of contracts.
I think you're right with respect to discussion about abstractions in the context of high-/low-level languages, but again, I feel like GP was trying to get away from the high-/low-level framing in the first place and might have meant something different when they used the word "abstraction".
Perhaps this is me misinterpreting things, but I took GP's use of "abstraction" as something more along the lines of what it might mean in "this library's abstractions are designed poorly/well because they are easy/hard to misuse and/or understand". In that context I think "abstraction" is more about the precise interface contract and its quality - e.g., a poorly-chosen abstraction might not reflect the domain it ostensibly represents well because it permits actions/behaviors that don't make sense for that domain, and that in part might be due to a language being unable to express a more appropriate contract. I feel that better matches GP's high-/low-type-system-complexity axis.
Yep. This was the biggest thing that turned me off Go. I ported the same little program (some text based operational transform code) to a bunch of languages - JS (+ typescript), C, rust, Go, python, etc. Then compared the experience. How were they to use? How long did the programs end up being? How fast did they run?
I did C and typescript first. At the time, my C implementation ran about 20x faster than typescript. But the typescript code was only 2/3rds as many lines and much easier to code up. (JS & TS have gotten much faster since then thanks to improvements in V8).
Rust was the best of all worlds - the code was small, simple and easy to code up like typescript. And it ran just as fast as C. Go was the worst - it was annoying to program (due to a lack of enums). It was horribly verbose. And it still ran slower than rust and C at runtime.
I understand why Go exists. But I can't think of any reason I'd ever use it.
Rust gets harder with codebase size, because of the borrow checker.
Not to mention that most of the communication libraries decided to be async-only, which adds another layer of complexity.
I strongly disagree with this take. The borrow checker, and rust in general, keeps reasoning extremely local. It's one of the languages where I've found that difficulty grows the least with codebase size, not the most.
The borrow checker does make some tasks more complex, without a doubt, because it makes it difficult to express something that might be natural in other languages (self-referential data structures, for instance). But the extra complexity is generally well scoped to one small component that runs into a constraint, not to the project at large. You work around the constraint locally, and you end up with a public (to the component) API which is as well defined and as clean (and often better defined and cleaner, because rust forces you to do so).
I work in a 400k+ LOC codebase in Rust for my day job. Besides compile times being suboptimal, Rust makes working in a large codebase a breeze with good tooling and strong typechecking.
I almost never even think about the borrow checker. If you have a long-lived shared reference you just Arc it. If it's a circular ownership structure like a graph you use a SlotMap. It by no means is any harder for this codebase than for small ones.
Disagree; having dealt with 40k+ LoC Rust projects, the borrow checker is not an issue.
Async is an irritation but not the end of the world... you can write non-async code, I have done it... Honestly, I am coming around on async after years of not liking it... I wish we didn't have function colouring, but yeah... here we are.
Funny, I explicitly waited to see async baked in before I even started experimenting with Rust. It's kind of critical to most things I work on. Beyond that, I've found that the async models in rust (along with tokio/axum, etc) have been pretty nice and clean in practice. Though most of my experience is with C# and JS/TS environments, the latter of which had about a decade of growing pains.
I still regularly use typescript. One problem I run into from time to time is "spooky action at a distance". For example, it's quite common to create some object and store references to it in multiple places. After all, the object won't be changed, and it's often more efficient this way. But later, a design change results in me casually mutating that object, forgetting that it's being shared between multiple components. Oops! Now the other part of my code has become invalid in some way. Bugs like this are very annoying to track down.
It's more or less impossible to make this mistake in rust because of how mutability is enforced. The mutability rules are sometimes annoying in the small, but in the large they tend to make your code much easier to reason about.
C has multiple problems like this. I've worked in plenty of codebases which had obscure race conditions due to how we were using threading. Safe rust makes most of these bugs impossible to write in the first place. But the other thing I - and others - run into all the time in C is code that isn't clear about ownership and lifetimes. If your API gives me a reference to some object, how long is that pointer valid for? Even if I now own the object and I'm responsible for freeing it, it's common in C for the object to contain pointers to some other data. So my pointer might be invalid if I hold onto it too long. How long is too long? It's almost never properly specified in the documentation. In C, hell is other people's code.
Rust usually avoids all of these problems. If I call a function which returns an object of type T, I can safely assume the object lasts forever. It cannot be mutated by any other code (since it's mine). And I'm not going to break anything else if I mutate the object myself. These are really nice properties to have when programming at scale.
I wholeheartedly concur based on my experience with Rust (and other languages) over the last ~7 or so years.
> If I call a function which returns an object of type T, I can safely assume the object lasts forever. It cannot be mutated by any other code (since it's mine). And I'm not going to break anything else if I mutate the object myself. These are really nice properties to have when programming at scale.
I rarely see this mentioned in the way that you did, and I'll try to paraphrase it in my own way: Rust restricts what you can do as a programmer. One can say it is "less powerful" than C. In exchange for giving up some power, it gives you more information: who owns an object, what other callers can do with that object, the lifetime of that object in relation to other objects. And critically, in safe Rust, these are _guarantees_, which is the essence of real abstraction.
In large and/or complicated codebases, this kind of information is critical in languages without garbage collection, but even when I program in languages with garbage collection, I find myself wanting this information. Who is seeing this object? What do they know about this object, and when? What can they do with it? How is this ownership flowing through the system?
Most languages have little/no language-level notion of these concepts. Most languages only enforce that types line up nominally (or implement some name-identified interface), or the visibility of identifiers (public/private, i.e. "information hiding" in OO parlance). I feel like Rust is one of the first languages on this path of providing real program dataflow information. I'm confident there will be future languages that further explore providing the programmer with this kind of information, or at least make it easier to answer these kinds of questions.
> I rarely see this mentioned in the way that you did, and I'll try to paraphrase it in my own way: Rust restricts what you can do as a programmer. One can say it is "less powerful" than C. In exchange for giving up some power, it gives you more information
Your paraphrasing reminds me a bit of structured vs. unstructured programming (i.e., unrestricted goto). Similar to what you said, structured programming is "less powerful" than unrestricted goto, but in return, it's much easier to follow and reason about a program's control flow.
At the risk of simplifying things too much, I think some other things you said make for an interesting way to sum this up - Rust does for "ownership flow"/"dataflow" what structured programming did for control flow.
I really like this analogy. In a sense, C restricts what you can do compared to programming directly in assembly. There are a lot of programs you can write in assembly that you can't write in the same way in C. But those restrictions also constrain all the other code in your program. And that's a wonderful thing, because it makes it much easier to make large, complex programs.
The restrictions seem a bit silly to list out because we take them for granted so much. But it's things like:
- When a function is called, execution starts at the top of the function's body.
- Outside of unions, variables can't change their type halfway through a program.
- Whenever a function is called, the parameters are always passed using the system calling convention.
- Functions return to the line right after their call site.
Rust takes this a little bit further, adding more restrictions. Things like "if you have a mutable reference to a variable, there are no immutable references to that variable."
I think it depends on the patterns in place and the actual complexity of the problems in practice. Most of my personal experience in Rust has been a few web services (really love Axum) and it hasn't been significantly worse than C# or JS/TS in my experience. That said, I'll often escape hatch with clone over dealing with (a)rc, just to keep my sanity. I can't say I'm the most eloquent with Rust as I don't have the 3 decades of experience I have with JS or nearly as much with C#.
I will say, that for most of the Rust code that I've read, the vast majority of it has been easy enough to read and understand... more than most other languages/platforms. I've seen some truly horrendous C# and Java projects that don't come close to the simplicity of similar tasks in Rust.
Rust indeed gets harder with codebase size, just like other languages. But claiming it is because of the borrow checker is laughable at best. The borrow checker is what keeps it reasonable, because it limits the scope of how one memory allocation can affect the rest of your code.
If anything, the borrow checker makes writing functions harder but combining them easier.
Only in languages/runtimes without threads, like Javascript.
In Rust, async vs threads is a performance tradeoff (and it's definitely not always clear who the winner will be), mostly relevant when you have tasks >> cores. Something like curl would have practically zero reasons to be async, but of course it is still subject to internet latency.
> I understand why Go exists. But I can't think of any reason I'd ever use it.
When you want your project to cross-compile down to a static binary that the end user can simply download and run, without any "installation", on any mainstream OS + CPU arch combination.
From my M1 Mac I can compile my project for Linux, MacOS, and Windows, for x86 and ARM for each. Then I can make a new Release on GitHub and attach the compiled binaries. Then I can curl the binaries down to my bare Linux x86 server and run them. And I can do all of this natively from the default Go SDK without installing any extra components or system configurations. You don't even need to have Go installed on the recipient server or client system. Don't even need a container system either to run your program anywhere.
You cannot do this with any other language that you listed. Interpreted languages all require a runtime on the recipient system + library installation and management, and C and Rust lack the ability to do native out-of-the-box cross compilation for other OS + CPU arch combinations.
Go has some method to implement enums. I never use enums in my projects, so I don't know how the experience compares to other systems. But I'm not sure I would use that as the sole criterion to judge the language. And you can usually get performance on par with any other garbage-collected language out of it.
When you actually care about the end user experience of running the program you wrote, you choose Go.
> it was annoying to program (due to a lack of enums)
Typescript also lacks enums. Why wasn't it considered annoying?
I mean, technically it does have an enum keyword that offers what most would consider to be enums, but that keyword behaves exactly the same as what Go offers, which you don't consider to be enums.
It’s trivial to switch based on the type field. And when you do, typescript gives you full type checking for that specific variant. It’s not as efficient at runtime as C, but it’s very clean code.
Go doesn’t have any equivalent to this. Nor does go support tagged unions - which is what I used in C. The most idiomatic approach I could think of in Go was to use interface {} and polymorphism. But that was more verbose (~50% more lines of code) and more error prone. And it’s much harder to read - instead of simply branching based on the operation type, I implemented a virtual method for all my different variants and called it. But that spread my logic all over the place.
If I did it again I’d consider just making a struct in go with the superset of all the fields across all my variants. Still ugly, but maybe it would be better than dynamic dispatch? I dunno.
I wish I still had the go code I wrote. The C, rust, swift and typescript variants are kicking around on my github somewhere. If you want a poke at the code, I can find them when I’m at my desk.
That wouldn't explain C, then, which does not have sum types either.
All three languages do have enums (as the term is normally defined), though. Go is only the odd one out in not using a dedicated keyword. As these programs were, by the author's account, written as carbon copies of each other rather than to the idioms of each language, it is likely the author didn't take the time to understand what features are available. No enum keyword was assumed to mean enums don't exist at all, I guess.
C has numeric enums and tagged unions, which are sum types without any compile time safety. That’s idiomatic C.
Go doesn’t have any equivalent. How do you do stuff like this in Go, at all?
I’ve been programming for 30+ years. Long enough to know direct translations between languages are rarely beautiful. But I’m not an expert in Go. Maybe there’s some tricks I’m missing?
Here’s the problem, if you want to have a stab at it. The code in question defines a text editing operation as a list of editing components: Insert, Delete and Skip. When applying an editing operation, we start at the start of the document. Skip moves the cursor forward by some specified length. Insert inserts at the current position and delete deletes some number of characters at the position.
Eg:
    enum OpComponent {
        Skip(int),
        Insert(String),
        Delete(int),
    }

    type Op = List<OpComponent>
Then there’s a whole bunch of functions which use operations - e.g. to apply them to a document, to compose them together and to do operational transform.
C has unions, but they're not tagged. You can roll your own tagged unions, of course, but that's moving beyond it being a feature of the language.
> How would you model this in Go?
I'm committing the same earlier sin by trying to model it from the solution instead of the problem, so the actual best approach might be totally different, but at least in staying somewhat true to your code:
    type OpComponent interface{ op() }
    type Op = []OpComponent

    type Skip struct{ Value int }
    func (s Skip) op() {}

    type Insert struct{ Value string }
    func (i Insert) op() {}

    type Delete struct{ Value int }
    func (d Delete) op() {}

    op := Op{
        Skip{Value: 5},
        Insert{Value: "hello"},
        Delete{Value: 3},
    }
> C has unions, but they're not tagged. You can roll your own tagged unions, of course, but that's moving beyond it being a feature of the language.
This feels like a distinction without a real difference. Hand-rolled tagged unions are how lots of problems are approached in real, professional C. And I think they're the right tool here.
> the actual best approach might be totally different, but at least in staying somewhat true to your code: (...)
Thanks for having a stab at it. This is more or less what I ended up with in Go. As I said, I ended up needing about 50% more lines to accomplish the same thing in Go using this approach compared to the equivalent Typescript, rust and swift.
I wish I'd kept my Go implementation. I never uploaded it to github because I was unhappy with it, and I accidentally lost it somewhere along the way.
> the actual best approach might be totally different
Maybe. But honestly I doubt it. I think I accidentally chose a problem which happens to be an ideal use case for sum types. You'd probably need a different problem to show Go or C# in their best light.
But ... sum types are really amazing. Once you start using them, everything feels like a sum type. Programming without them feels like programming with one of your hands tied behind your back.
> As I said, I ended up needing about 50% more lines to accomplish the same thing in Go
I'd be using Perl if that bothered me. But there is folly in trying to model from a solution instead of the problem. For example, maybe all you needed was:
    type OpType int

    const (
        OpTypeSkip OpType = iota
        OpTypeInsert
        OpTypeDelete
    )

    type OpComponent struct {
        Type OpType
        Int  int
        Str  string
    }
Or something else entirely. Without fully understanding the exact problem, it is hard to say what the right direction is, even where the direction you chose in another language is the right one for that language. What is certain is that you don't want to write code in language X as if it were language Y. That doesn't work in programming languages, just as it does not work in natural languages. Every language has its own rules and idioms that don't transfer to another. A new language means you realistically have to restart finding the solution from scratch.
> You'd probably need a different problem to show Go or C# in their best light.
That said, my profession sees me working on a set of libraries in various languages, including Go and Typescript, that appear to be an awful lot like your example. And I can say from that experience that the Go version is much more pleasant to work on. It just works.
I'll agree with you all day, every day, that the Typescript version's types are much more desirable to read. It absolutely does a better job of modelling the domain. No question about it. But you only need to read it once to understand the model. When you have to continually fight everything else beyond that, it is of little consolation how beautiful the type definitions are.
You're right, though: it all depends on what you find most important. No two programmers are ever going to agree on what to prioritize. You want short code, whereas I don't care. Likewise, you probably don't care about the things I care about. Different opinions are the spice of life, I suppose!
Yes I think I mentioned in another comment that that would be another way to code it up. It’s ugly in a different way to the interface approach. I haven’t written enough go to know which is the least bad.
What are you “fighting all day” in typescript? That’s not my experience with TS at all.
What are the virtues of go, that you’re so enamoured by? If we give up beauty and type safety, what do you get in trade?
I don't become enamoured by languages. I really don't care if I have to zig or zag. I'll happily work in every language under the sun. It is no more interesting than trying to determine whether Milwaukee or Makita makes a better drill. Who cares? Maybe you have to press a different button, but they both do the same thing in the end. As far as I'm concerned, it's all just 1s and 0s at the end of the day.
However, I have found the Go variant of said project to be more pleasant because, as before, it just works. The full functionality of those libraries is fairly complex and it has had effectively no bugs. The Typescript version on the other hand... I am disenchanted by software that fails.
Yeah, you can blame the people who have worked on it. Absolutely. A perfect programmer can write bug-free code in every language. But for all the hand-wringing about how complex types are supposed to magically save you from making mistakes that keeps getting trumpeted around here, I shared it as a fun anecdote to the opposite — that, under real-world conditions, where you are likely to encounter programmers who aren't perfect, Go actually excelled in a space that seems to reflect your example.
But maybe it's not the greatest example to extol the virtues of a language. I don't know, but I am not going to start caring about one language over another anyway. I'm far more interested in producing great software. Which brand of drill was used to build that software matters not one bit to me. But to each their own. Different opinions are the spice of life, I suppose!
There's a lot of ecosystem behind it that makes sense for moving off of Node.js for specific workloads, but isn't as easily done in Rust.
So it works for those types of employers and employees who need more performance than Node.js, but can't use C for practical reasons, or can't use Rust because specific libraries don't exist as readily supported by comparison.
Rue author here, yeah I'm not the hugest fan of "low level vs high level" framing myself, because there are multiple valid ways of interpreting it. As you yourself demonstrate!
As some of the larger design decisions come into place, I'll find a better way of describing it. Mostly, I am not really trying to compete with C/C++/Rust on speed, but I'm not going to add a GC either. So I'm somewhere in there.
How very humble of you not to mention being one of the primary authors behind the TRPL book. Steve, you're a gem to the world of computing. I've always considered you the J. Kenji of the Rust world.
Seems like a great project let's see where it goes!
> Mostly, I am not really trying to compete with C/C++/Rust on speed, but I'm not going to add a GC either. So I'm somewhere in there.
Out of curiosity, how would you compare the goals of Rue with something like D[0] or one of the ML-based languages such as OCaml[1]?
EDIT:
This is a genuine language design question regarding an imperative/OOP or declarative/FP focus and is relevant to understanding the memory management philosophy expressed[2]:
    No garbage collector, no manual memory management.
    A work in progress, though.
Closer to an OCaml than a D, in terms of what I see as an influence. But it's likely to be more imperative/FP than OOP/declarative, even though I know those axes are usually considered to go the way you put them rather than the way I put them.
> But it's likely to be more imperative/FP than OOP/declarative, even though I know those axes are usually considered to go the way you put them rather than the way I put them.
Fascinating.
I look forward to seeing where you go with Rue over time.
I don't think you'd want to write an operating system in Rue. I may not include an "unsafe" concept, and will probably require a runtime. So that's some areas where Rust will make more sense.
As for Go... I dunno. Go has a strong vision around concurrency, and I just don't have one yet. We'll see.
FWIW, I really like the way C# has approached this need... most usage is exposed via attribute declarations (DllImport) for P/Invoke. Contrast that with, say, JNI in Java or even the Go syntax. The only thing that might be a significant improvement would be an array/vector of lookup names for the library on the system, given how specific versions are often tagged in Linux vs Windows.
> because there are multiple valid ways of interpreting it
There are quantitative ways of describing it, at least on a relative level. "High abstraction" means that interfaces have more possible valid implementations (whether or not the constraints are formally described in the language, or informally in the documentation) than "low abstraction": https://news.ycombinator.com/item?id=46354267
Do you think you'll explore some of the same problem spaces as Rust? Lifetimes and async are both big pain points of Rust for me, so it'd be interesting to see a fresh approach to these problems.
I couldn't see how long-running memory is handled; is it handled similarly to Rust?
Simplified as in easier to use, or simplified as in less language features? I'm all for the former, while the latter is also worth considering (but hard to get right, as all the people who consider Go a "primitive" language show)...
Since that seems to be the (frankly BS) slogan that almost entirely makes up the language's landing page, I expect it's really going to hurt the language and/or make it all about useless posturing.
That said, I'm an embedded dev, so the "level" idea is very tangible to me. Rust is also very exciting for that reason, and Rue might be as well. I should have a look, though it sounds like it might not be targeting bare metal any time soon. :)
One objective definition is simply that everything that is not assembly is a high-level language - but that is quite a useless definition. The other is about how "deeply" you can control the execution, e.g. whether you have direct control of when and what gets allocated, some control over vectorization, etc.
Here Rust is obviously as low-level as C, if not more so (both have total control over allocations, but both still leave calling conventions and such up to the compiler), while Go is significantly higher (the same level as C#, slightly lower than Java - a managed language with a GC and value types).
The other often mistaken spectrum is expressivity, which is not directly related to low/high levelness. E.g. both Rust and Scala are very expressive languages, but one is low, the other is high level. C and Go both have low expressivity, and one is low the other is high level.
Agree with Go being basically C with string support and garbage collection, which makes it a good language. I think Rust feels more like a C++ replacement, especially syntactically. But each person will say something different. If people can create new languages and there's a need, then they will. Not to say it's a good or bad thing, but eventually it would be good to level up properly. Maybe AI does that.
All are high level as long as they don't expose CPU capabilities; even ISO C is high level, unless we count compiler-specific language extensions - and any language can have compiler extensions.
C pointers are nothing special, plenty of languages expose pointers, even classical BASIC with PEEK and POKE.
The line is blurred, and it doesn't help that some folks spread the urban myth that C is special somehow, only because they never bothered with either the history of programming languages, or especially the history of systems programming outside Bell Labs.
They're nothing special, but they were designed for a particular CPU and expose the details of that CPU. And since we were talking about C specifically, not a bunch of other random languages that may have done similar things...
While most modern CPUs are designed for C and thus share the same details, if your CPU is of a different design, you have to emulate the behaviour. Which works perfectly fine — but the question remains outstanding: where does the practical line get drawn? Is 6502 assembler actually a high-level language too? After all, you too can treat it as an abstract machine and emulate its function on any other CPU, just the same as you do with C pointers.
I think this is precisely why Rust is gold - you can pick the abstraction level you work at. I used it a lot when simulating quantum physics - on one hand, I needed to implement low-level numerical operations with custom data structures (to squeeze out as much performance as possible); on the other, I needed to be able to write and debug it easily.
It is similar to PyTorch (which I also like), where you can add two tensors by hand, or have your whole network as a single nn.Module.
> C was designed as a high level language and stayed so for decades
C was designed as a "high level language" relative to the assembly languages available at the time and effectively became a portable version of same in short order. This is quite different to other "high level languages" at the time, such as FORTRAN, COBOL, LISP, etc.
When C was invented, as K&R C, it was hardly lower level than other systems programming languages that predated it, going back to JOVIAL in 1958.
It did not even have compiler intrinsics, a concept introduced by ESPOL in 1961, which allowed programming Burroughs systems without using an external Assembler.
K&R C was high level enough that many of the CPU features people think about nowadays when using compiler extensions, as they are not present in the ISO C standard, had to be written as external Assembly code; support for inline Assembly came later.
I think we are largely saying the same thing, as described in the introduction of the K&R C book:
    C is a relatively "low level" language. This characterization is not
    pejorative; it simply means that C deals with the same sort of objects
    that most computers do, namely characters, numbers, and addresses. [0]