
Good question! Nobody was writing 1900 + (year % 100), right?


For example, some systems stored the year in a byte, and when printing out a report it printed "19" and that byte - so year 1999 would be followed by year 19100.

Some systems, where storing numbers as columns of characters was common practice (idiomatic COBOL?), stored the date as two digits (possibly BCD), so the possible range is 00-99 no matter how many bits are used.


Some people were.

But it's worse than that. In the '90s a lot of code stored the year not as a 16-bit binary value but as a two-character string. That is, it stored a char(2), parsed it as a 2-digit number, and then converted it to a date by adding 1900.

So it was only really "saving space" when compared with storing a char(4).


But if they wanted to save space, why not store an 8-bit number? I imagine it must have had something to do with punch-card compatibility or some binary-coded-decimal nonsense. Still seems inefficient.


If a system gives you two options for storing a date (2-digit or 4-digit years), how many dates do you need to store and use in calculations before creating a new data type, plus all of its supporting operations, actually saves space? In recent years this kind of decision is usually made only because something else is causing an issue; otherwise we rarely consider the space required for a date (and many languages no longer have a separate type for dates).


Maybe because they didn't know better? I was going to say "maybe they were bad programmers", but likely just "average" programmers.

No punch cards or BCD, I'm talking about DOS/Windows systems.



