I'm curious what exactly you ask here. I consider myself a decent engineer (for practical purposes), but without a CS degree, and I likely would not have passed that question.
I know compilers can do some crazy optimizations, but I wouldn't have guessed they'd transform something from O(n) to O(1). Having said that, I still don't feel this has much relevance to my actual job for the most part. Such performance knowledge seems so abstracted away from everyday programming by database systems and managed offerings like Spark and Snowflake that, unless you intend to work on those systems, it isn't that useful (being aware these optimizations happen can be, for sure).
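For concreteness, this is roughly the kind of loop being discussed (an illustrative sketch, not the article's exact code; whether it actually collapses depends on the compiler and flags):

    #include <cstdint>

    // Looks like an O(n) loop, but with optimizations enabled (e.g. -O2),
    // Clang and GCC typically replace it with a closed-form expression
    // along the lines of n*(n-1)/2, i.e. constant time.
    uint64_t sum_below(uint64_t n) {
        uint64_t total = 0;
        for (uint64_t i = 0; i < n; ++i) {
            total += i;
        }
        return total;
    }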
He thinks it makes him look clever, or more likely subtly wants people to think "wow, this guy thinks something is obvious when Matt Godbolt found it surprising".
This kind of question is entirely useless in an interview. It's just a random bit of trivia that a potential hire either happens to have come across or happens to remember from math class.
Have you considered that maybe Matt isn’t all that surprised by this optimization, but he is excited about how cool it is, and he wants readers of all backgrounds to also be excited about how cool it is, and is just feigning surprise so that he can share a sense of excitement with his audience?
I guess what's surprising here is that compilers are able to perform those optimizations systematically on arbitrary code, not the optimizations themselves, which should be obvious to a human.
Whether they get the question exactly right and can pinpoint the specific compiler passes or algebraic properties responsible for reductions like this is totally irrelevant and not what you're actually looking for. It's a very good jumping-off point for a conversation about optimization, and it tests whether they're the type of developer who has ever looked at the assembly produced in their hot path.
Anyone who dumbly suggests that loops in source code will always result in loops in assembly doesn’t have a clue. Anyone who throws their hands up and says, “I have no idea, but I wonder if there’s some loop invariant or algebraic trick that can be used to optimize this, let’s think about it out loud for a bit” has taken a compiler class and gets full marks. Anyone who says, “I dunno, let’s see what godbolt does and look through the llvm-opt pane” gets an explicit, “hire this one” in the feedback to the hiring manager.
It’s less about what they know and more about if they can find out.
So in other words, it isn't "basic and essential optimizations" that you would expect even a junior engineer to know (as your comment implies), but a mechanism to trigger a conversation to see how they think about problems. In fact, it sounds like something you wouldn't expect them to know.
I didn’t write the GP comment. I wouldn’t call this basic and essential, but I would say that compilers have been doing similar loop simplifications for quite some time. I’d expect any mid to senior developer with C/C++ on their resume to at least consider the possibility that the compiler can entirely optimize away a loop.
> In fact, it sounds like something you wouldn't expect them to know.
I’d go a step further, I don’t think anyone, no matter how experienced they are, can confidently claim that optimized assembly will or won’t be produced for a given loop. That’s why the best answer above is, “I dunno”. If performance really matters, you have to investigate and confirm that you’re getting good code. You can have an intuition for what you think might happen, and that’s a useful skill to have on its own, but it’s totally useless if you don’t also know how to confirm your suspicions.
My question is in the context of doing those optimizations yourself: understanding what can be done to make the code more efficient and how to code it up, not the compiler engineering that makes it happen automatically.
Yikes, gross. That’s like an option of last resort IMO. I’d rather maintain the clean loop-based code unless I had evidence that the compiler was doing the wrong thing and it was in my critical path.
The compiler can only perform optimizations that don't change observable behaviour.
For example, it can only parallelize code that is inherently parallelizable to begin with, and unless you design your algorithm with that in mind, it's unlikely to be.
My belief is that it's better to be explicit, be it with low-level or high-level abstractions.
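As a concrete illustration (my own sketch, not anyone's production code): floating-point addition isn't associative, so reordering a serial sum into partial sums would change observable behaviour. Compilers therefore generally leave a loop like this alone unless you opt in with flags such as -ffast-math or -fassociative-math.

    #include <cstddef>

    // Each addition depends on the previous one. Without -ffast-math (or
    // -fassociative-math), the compiler keeps this serial order, because
    // reordering floating-point adds can produce a different result.
    double sum(const double* xs, std::size_t n) {
        double total = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            total += xs[i];
        }
        return total;
    }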
My interview aims to assess whether the candidate understands that the dependency of each iteration on the previous one prevents effective utilization of a superscalar processor, knows ways to overcome that, and knows whether the compiler can do that optimization automatically, when it absolutely cannot, and why.
I generally focus more on the sum of arbitrary data, but I used to also ask about a formulaic sum (linear to constant time) as an example of something a compiler is unlikely to do.
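To sketch the kind of manual optimization I have in mind for the arbitrary-data case (illustrative only; whether it beats what the compiler already emits depends on the type and the target): splitting the total into several independent accumulators breaks the loop-carried dependency, so a superscalar/out-of-order core can overlap the additions.

    #include <cstddef>
    #include <cstdint>

    // Four independent accumulators break the serial dependency chain, so
    // the additions can execute in parallel; they are combined at the end.
    uint64_t sum4(const uint64_t* xs, std::size_t n) {
        uint64_t a = 0, b = 0, c = 0, d = 0;
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            a += xs[i];
            b += xs[i + 1];
            c += xs[i + 2];
            d += xs[i + 3];
        }
        for (; i < n; ++i) {
            a += xs[i];  // remainder
        }
        return a + b + c + d;
    }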
My thinking is that I expect good engineers to be able to do those optimizations themselves rather than rely on compilers.