> Self-confidence as a programmer is when starting a new project, storing the transaction ID as a long rather than an int...
uint64_t even
Or a UUID as others have suggested.
Technically the C spec doesn't say exactly how many bits int, long and long long must have, only minimum ranges. If you want specific sizes and your code to be somewhat portable, use the fixed-width types (int32_t, uint64_t, etc.) to make that clear. There are also types for size-like things (size_t) and for pointer- and offset-like things (intptr_t, ptrdiff_t).
There's a usecase for lower-bounded types such as int_least32_t, where the compiler may choose a larger type if it offers better performance. However, if you're using that, the test suite should run all relevant tests for multiple actual sizes of that particular type (through strategic use of #define, for example).
> There's a usecase for lower-bounded types such as int_least32_t, where the compiler may choose a larger type if it offers better performance.
If you're looking for the best performance you shouldn't use leastX types, you should use fastX types (e.g. int_fast32_t for the "fastest integer type available in the implementation that has at least 32 bits").
The difference is that "leastX" is the smallest type in the implementation with at least X bits, while "fastX" is whichever type the implementation considers fastest with at least X bits. So if the implementation has 16, 32 and 64-bit ints and is a 32-bit architecture, least8 would give you a 16-bit int but fast8 might give you a 32-bit one.
The reason is that anything else means mixing your types with the default types, and you lose the safety battle at every
int32_t x = call_returning_int();
line. Otherwise you have to assert or recover at every boundary between your types and theirs. C is a language where you have absolutely no guarantee that an int, or a constant defined as an int, will fit into anything beyond int, long or long long, and UB is patiently waiting for your mistake. The way to handle that is to never change or fix types unless you have to, and then be careful when you do.
What he means by that is the old meaning of auto, which C++11 removed in favour of making it do type deduction instead.
auto foo = func_returning_int(); to my knowledge worked in pre-C99 C because 'auto' was the storage-class keyword - like 'register' - and the default type ("implicit int", removed in C99) was 'int'.
That's why, when you omit a type in C++, the compiler warns you that there's no default int.
Your code is actually less portable if you use exact-width types like uint64_t -- if your system doesn't implement exactly that type, the typedef won't exist. If all you need is a really big number, 'unsigned long long' is required to exist and to be able to store 0..2^64-1.