Hacker News

A smart solution could be for arithmetic operators to automatically coerce their operands to the result type, when the result type is already known.


Indeed, that is my intuition as well. This is the safest default behavior from the compiler, and even solves crazy expressions like:

    u64 = ((u8 + u16) * u32) / u8;
It's hard to reason about what a programmer would want that statement to do. Coercing everything to u64 is the safest option. The idea, though, is to allow the programmer to use explicit casts to define exactly what they want when the need arises. So:

    u64 = (((u8 + u16) as u16) * u32) / u8;
Would mean u16 addition, u64 multiplication and u64 division. So you get the benefit of safe, implicit type widening without losing the ability to micro-optimize when you want to.
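To make the stakes concrete, here's a small Rust sketch (operand values chosen arbitrarily, and the `* u32 / u8` tail simplified to `* 2`) of how the "widen everything to u64" reading and the "explicit u16 addition" reading diverge:

```rust
fn main() {
    let a: u8 = 200;
    let b: u16 = 65_500;

    // "Widen everything to u64 first": no intermediate overflow.
    let wide = (u64::from(a) + u64::from(b)) * 2;

    // "Do the addition in u16 first", as the explicit cast requests:
    // 200 + 65_500 wraps around the u16 range before widening.
    let narrow = u64::from(u16::from(a).wrapping_add(b)) * 2;

    assert_eq!(wide, 131_400);
    assert_eq!(narrow, 328);
}
```

The two readings differ by orders of magnitude, which is exactly why leaving the choice implicit is dangerous and an explicit cast is worth the keystrokes.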


The annoying thing is that (because of type inference) parts of the computation could be split across separate expressions:

    fn required_bytes(width: u16, height: u16) -> u64 {
        let size = width * height; // what's the type of size?
        size + 12
    }
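For reference, in today's Rust inference does pick u16 for the product, so any overflow happens before the value ever reaches u64. A sketch of that failure mode (using `wrapping_mul` to make the wrap visible rather than a debug-build panic):

```rust
fn main() {
    let width: u16 = 300;
    let height: u16 = 300;

    // The product is inferred as u16, so 300 * 300 = 90_000
    // wraps around the u16 range (90_000 - 65_536 = 24_464).
    let size = width.wrapping_mul(height);
    assert_eq!(size, 24_464);

    // Widening afterwards is too late: the damage is already done.
    let total = u64::from(size) + 12;
    assert_eq!(total, 24_476);
}
```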


True, type inference would make an ideal solution difficult. But I'm totally fine with the compiler failing to apply implicit casting when type inference is involved. This code is just as readable, if not better:

    fn required_bytes(width: u16, height: u16) -> u64 {
        let size: u64 = width * height;
        size + 12
    }
I just really don't want to have to write this all the time:

    fn required_bytes(width: u16, height: u16) -> u64 {
        let size = (width as u64) * (height as u64);
        size + 12
    }


I can sympathize, but the last thing I would want in a language is for benign-looking refactorings to change meaning. E.g.

    let a: u16;
    let b: u16;
    fn f(x: u32) -> ...

    f(a*b) // 32 bit result
    
    let x = a*b; // x is u16
    f(x)
Now there is a sane answer to this: define multiplication to always result in larger integer types, and require some explicit downcasting. But I'm not sure anyone will go for this.
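That "always widen" semantics can be sketched in today's Rust with a hypothetical newtype (`W16` is my name, not anything standard) whose multiplication always produces the next-larger type, so the product can never overflow:

```rust
use std::ops::Mul;

// Hypothetical u16 wrapper whose multiplication always widens.
#[derive(Clone, Copy, Debug, PartialEq)]
struct W16(u16);

impl Mul for W16 {
    type Output = u32; // u16 * u16 -> u32, losslessly

    fn mul(self, rhs: W16) -> u32 {
        u32::from(self.0) * u32::from(rhs.0)
    }
}

fn main() {
    let a = W16(60_000);
    let b = W16(60_000);
    // 3_600_000_000 would overflow u16 badly, but fits u32 exactly.
    let p: u32 = a * b;
    assert_eq!(p, 3_600_000_000);
}
```

Downcasting back to u16 would then need an explicit, checked conversion (e.g. `u16::try_from`), which is precisely the friction the comment predicts people won't accept.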


If there is a type inferred, single character operator which can cast one numeric type to another, both sides can be happy.
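For what it's worth, Rust's `From`/`Into` already gives a lossless (if not single-character) spelling of the widening cast, and unlike `as` it can't silently truncate if a type changes in a later refactor:

```rust
fn required_bytes(width: u16, height: u16) -> u64 {
    // u64::from(u16) is infallible; if `width` later became u128,
    // this line would fail to compile instead of truncating.
    u64::from(width) * u64::from(height) + 12
}

fn main() {
    assert_eq!(required_bytes(4_096, 4_096), 16_777_228);
}
```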


This way of thinking leads to ASCII spaghetti as features are added over time.


Should be u16 at first (or u32 for overflow?) and promoted to u64 when returning from the function.


If the result type is already known and the result type is wider.



