A smart solution could be to have arithmetic operators automatically coerce their operands to the type of the result, if the result type is already known.
Indeed, that is my intuition as well. This is the safest default behavior from the compiler, and even solves crazy expressions like:
u64 = ((u8 + u16) * u32) / u8;
It's hard to reason about what a programmer would want that statement to do. Coercing everything to u64 is the safest option. The idea, though, is to allow the programmer to use explicit casts to define exactly what they want when the need arises. So:
u64 = (((u8 + u16) as u16) * u32) / u8;
Would mean u16 addition, u64 multiplication and u64 division. So you get the benefit of safe, implicit type widening without losing the ability to micro-optimize when you want to.
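The placement of the cast genuinely changes the answer once the narrow addition can wrap. As a rough sketch in present-day Rust (where every widening is explicit, and the operand values are invented for illustration), widening everything first versus pinning the addition at 16 bits gives different results:

```rust
// Everything widened up front: (a + b) * c / d entirely at 64 bits,
// so no intermediate step can overflow.
fn wide(a: u8, b: u16, c: u32, d: u8) -> u64 {
    ((a as u64 + b as u64) * c as u64) / d as u64
}

// Addition pinned to 16 bits by the explicit cast, as in the `as u16`
// example above: the sum wraps before the rest of the expression widens.
// (`wrapping_add` stands in for unchecked 16-bit arithmetic.)
fn narrow(a: u8, b: u16, c: u32, d: u8) -> u64 {
    ((a as u16).wrapping_add(b) as u64 * c as u64) / d as u64
}

fn main() {
    // 200 + 65_500 = 65_700, which does not fit in a u16.
    println!("{} {}", wide(200, 65_500, 7, 3), narrow(200, 65_500, 7, 3));
}
```

The divergence is exactly the micro-optimization trade-off described above: the narrow form is cheaper on some targets, but the programmer has opted into the wrap.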
True, type inference would make an ideal solution difficult. But I'm totally fine with the compiler failing to apply implicit casting when type inference is involved. This code is just as readable, if not better:
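The snippet that followed here is not preserved, but the point can be illustrated with a hypothetical sketch (names and values are mine): when inference leaves no known result type, the programmer simply writes the widening casts out by hand, which reads perfectly well:

```rust
// Hypothetical: with `let x = a + b;` there is no target type for the
// compiler to coerce toward, so the widening is spelled out explicitly.
fn sum_u64(a: u8, b: u16) -> u64 {
    a as u64 + b as u64
}

fn main() {
    let x = sum_u64(200, 1_000);
    println!("{x}");
}
```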
I can sympathize, but the last thing I would want in a language is for benign-looking refactorings to change meaning. E.g.:
let a: u16;
let b: u16;
fn f(x: u32) -> ...
f(a * b);      // 32-bit result
let x = a * b; // x is inferred as u16
f(x)
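To make the hazard concrete, here is a sketch of the two call sites in current Rust, with explicit casts standing in for the proposed implicit widening and invented operand values; the "equivalent" refactoring disagrees as soon as the product exceeds `u16::MAX`:

```rust
fn f(x: u32) -> u32 {
    x
}

// Original call site: under the proposed rule the multiply widens to
// u32, which the explicit casts reproduce here.
fn direct(a: u16, b: u16) -> u32 {
    f(a as u32 * b as u32)
}

// Refactored through a local: inference pins `x` at u16, so the multiply
// wraps at 16 bits (`wrapping_mul` stands in for that) before widening.
fn refactored(a: u16, b: u16) -> u32 {
    let x = a.wrapping_mul(b);
    f(x as u32)
}

fn main() {
    // 300 * 300 = 90_000, which does not fit in a u16.
    println!("{} {}", direct(300, 300), refactored(300, 300));
}
```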
Now there is a sane answer to this: define multiplication to always result in larger integer types, and require some explicit downcasting. But I'm not sure anyone will go for this.
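That alternative is easy to sketch as a helper in today's Rust (the function name is mine, not a proposal): multiplication always produces the doubled width, so the product can never silently wrap, and narrowing back down is an explicit, checked step:

```rust
use std::convert::TryFrom;

// Hypothetical "always-widening" multiply: u16 * u16 yields u32,
// so the full product is always representable.
fn widening_mul(a: u16, b: u16) -> u32 {
    a as u32 * b as u32
}

fn main() {
    let p = widening_mul(300, 300); // exact: 90_000
    // Downcasting back to u16 is explicit and fails loudly when the
    // value does not fit.
    let fits = u16::try_from(p).is_ok();
    println!("{p} {fits}");
}
```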