
> The reason why it is not "junk" is that it will teach him a bunch of important things: that you can "talk" to a computer using different languages, that those languages have different capabilities, different properties and different grammars (syntax), and that languages are all about grammar and semantics.

This would be a good point if the year were 2003. Compiling code and setting up a proper coding environment is a solved problem that isn't worth the headspace of a young and aspiring programmer. In the modern day, you can open up the console in your browser and start writing and running code just like that.

What's far more important for a beginner is to learn algorithms and data structures, which are timeless concepts that are highly valued and can be applied to any language or environment (and to give OP credit, C++ is a fine language for learning these concepts). Recursion vs. iteration, control flow, data structures, time complexity -- mastering these topics will enable the student to write programs that make it worthwhile to then learn the auxiliary technologies that make their programs usable and sharable -- unix commands, compilers, linkers, etc. -- learning those tools as a means to an end.

> The correct and actually beneficial philosophy is user-tool co-evolution, that is mutual adaptation. And for this to happen, the user has to really understand their tool.

I would argue that a lot of existing technologies such as unix are less philosophy-driven and more backwards-compatibility-driven, with things existing purely for historical reasons. The idiosyncrasies of, say, unix can be a headache for a beginner, and it'd be a better use of their time to only interact with those technologies when absolutely necessary.



> This would be a good point if the year were 2003. Compiling code and setting up a proper coding environment is a solved problem that isn't worth the headspace of a young and aspiring programmer. In the modern day, you can open up the console in your browser and start writing and running code just like that.

"Modernity" has nothing to do with it. In the old days you could switch on you family computer and start entering Basic just like that too. The only difference between yesterday and now, is that the leverage is much stronger.

Adding and multiplying matrices has been a solved problem for even longer, yet we teach students how to do it manually. It's a solved problem, yet we expect them to understand the solution and be able to execute it.

I don't know why it shouldn't also be the case in CS, unless what you want is code monkeys [1].

> Recursion vs iteration

You cannot explain that properly without first explaining compilers, stack machines, routine calls and jumps. If you skip that, you just end up saying "anyway, use iteration when your program mysteriously blows up". And if you then go low-level all of a sudden like this, your students will discard your explanation and remember the previous rule of thumb. That's one of the reasons why some people bad-mouth us, saying we abuse the terms "science" and "engineering" in "computer science" and "software engineering".

CS 101 should always be in assembly language on some 8-bit microcontroller. If you don't do that, you are cutting corners.

[1] https://www.joelonsoftware.com/2005/12/29/the-perils-of-java...


> You cannot explain that properly without first explaining compilers, stack machines, routine calls and jumps. If you skip that, you just end up saying "anyway, use iteration when your program mysteriously blows up"

Isn't this an implementation detail? I mean, it depends on whether your course is focused on teaching first-level programming (how to take some simple problem, then work out the math and code it with if, for, while and some variables) or whether you actually have a course about, say, C++ where you get into detail about why these older languages are designed the way they are.

Just to make things clear, I don't like "magic"; I too prefer to understand how the computer works from the logic gates up, though I am still missing some stuff in my knowledge. But what I noticed is that the introduction to programming is very hard. I remember wasting hours explaining stuff to a fellow student, and after those many hours he had the revelation: "so this computer is dumb, we have to tell it to do everything step by step, all the details...".

So you have the problem of teaching the students algorithms (for my son they use some C-like pseudocode), then you want them to start running those algorithms ASAP so they can get feedback from the machine, but the syntax is a big issue. It would not be efficient to also teach them to use the command line (Windows and Linux), run the compiler, troubleshoot when the command fails to run (maybe a wrong directory or paths), and then parse the syntax errors to find the actual mistake that produced that sometimes unrelated message. With an IDE you might get the red squiggly lines as you type, with some hints on what is missing (before you write 10 new lines and don't know which one is the problematic one).

But I agree with you if the student is not in the first year of learning programming, and indeed people who do computer science will study compilers, CPU architecture, data structures and a lot of other stuff.


> Isn't this an implementation detail? I mean, it depends on whether your course is focused on teaching first-level programming (how to take some simple problem, then work out the math and code it with if, for, while and some variables) or whether you actually have a course about, say, C++ where you get into detail about why these older languages are designed the way they are.

As much as the fact that numbers are stored on a finite number of bits. Your iterative factorial won't blow up the stack for 100!, but it gives an obviously wrong result, like a negative number.

Don't blame C/C++; blaming the tool is generally a bad idea (now, if your school teaches cooking using flamethrowers, that's a different story...). Many languages don't make promises about tail-call optimization. A lot of languages don't have "big number" support (as in Lisp) -- even "modern" ones. Very few languages will accept recursive data structure definitions without using pointers (or references if you want, but the label you put on this can of worms doesn't make much difference). Some will actually give the right-ish result because they use floating-point numbers internally, but then you might have to explain why the hell 1/10 does not give exactly 0.1 (which is why, in some algorithms, you compare not-so-real numbers using <= instead of ==).

You cannot escape the fact that computers are mechanical machines, not abstract math machines (unless you load a program that does precisely that). Part of "the Art of Programming" is about overcoming that limitation.

Abstractions are lies by definition. Lies are sometimes a necessary evil, but abstraction abuse is a poison.


I’m neutral on whether knowing how to compile from terminal is essential or not, but I disagree with this:

> Compiling code and setting up a proper coding environment is a solved problem that isn't worth the headspace of a young and aspiring programmer. In the modern day, you can open up the console on your browser and start writing and running code just like that.

No one should get into writing more complex programs without understanding the basics of the tooling, and at most we have a few auto-generators that will create a skeleton that can and will break with even the slightest change.



