When bit shifting, coerce to the result type first, then shift.
var foo: u32 = 0;
var bar: u8 = 1;
foo = bar << 8;
The code above gives an error saying: error: integer value 8 cannot be coerced to type 'u3'
It seems like it is checking the RHS of the bit shift against the type of bar instead of foo.
var foo: u32 = 0;
var bar: u32 = 1; // Changed to u32
foo = bar << 31; // Now I can bit shift a lot farther
var foo: u32 = 0;
var bar: u8 = 1;
foo = bar << 7; // Only shift 7 places
Both examples above work. If there's a good reason for this behaviour I'm happy to hear it, but the original example seems like it should work fine too.
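(Following the advice in the first line, the original example can also be made to work by widening bar explicitly before the shift; a minimal sketch:)
var foo: u32 = 0;
var bar: u8 = 1;
foo = @as(u32, bar) << 8; // the shift now happens in u32, so the amount may be anything a u5 holds (0..31)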
I think one reason would be when left-shifting signed types:
One might expect left-shifting an i3 with 0b010 = 2 by one bit to return @as(i3, 0b100) = -4. If the shifted type had more bits, it would instead result in positive 4.
And making the type of the expression lhs << rhs depend not on lhs but on the result type / context it is used in sounds like an easy source of errors, especially in more complex expressions, function calls, etc.
For unsigned types it would be less surprising (though the consistency of unsigned and signed types behaving the same way is too good to give up), but one could still expect to discard bits by shifting them out instead of extending to a bigger type. (Though for that reason it's currently safety-checked UB iirc.)
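To make that concrete, here is a small sketch of the i3 case. It shifts the unsigned bit pattern and reinterprets it, which sidesteps the question of exactly how a plain << on the i3 itself is safety-checked at runtime (builtins are written in the two-argument style used elsewhere in this thread; newer Zig versions take a single argument):
const std = @import("std");

test "the same shift gives different values at different widths" {
    // In an i3 the pattern 0b100 is -4, so shifting 0b010 (= 2) left by one
    // bit flips the sign if the result stays 3 bits wide.
    const narrow = @bitCast(i3, @as(u3, 0b010) << 1);
    try std.testing.expectEqual(@as(i3, -4), narrow);
    // Widened to i8 before the shift, the same operation stays positive.
    const wide = @as(i8, @as(i3, 0b010)) << 1;
    try std.testing.expectEqual(@as(i8, 4), wide);
}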
I think it's too confusing for an operator to consider context before applying itself.
Because as I understand it, what you're saying effectively amounts to this:
var bar: u8 = 1;
var foo: u32 = bar << 31;
// This expression ^ considers the context it's being applied in, widens its operands, then applies?
This can't even be done consistently. How should this work?
const bar: u8 = 1;
const foo1: u32 = bar << 31;
// Apparently supposed to widen bar to a u32?
const foo2: u32 = ((( ... ( bar << 31 ) ... )));
// Here the compiler needs to ^ propagate context through all the brackets?
const foo3: u32 = ((( ... (bar << 30) ... ))) + ((( ... (bar << 30) ... )));
// Here the compiler needs to propagate the context down either side?
const foo4 = bar << 31;
// Here there is no context, so it creates a weird inconsistency that this is invalid but all the others are valid?
fn identity(x: anytype) @TypeOf(x) { return x; }
fn identity2(x: u32) u32 { return x; }
const foo5 = identity(bar << 31); // How should this work? We can't propagate the context through this. So it has to be invalid.
const foo6 = identity2(bar << 31); // But this should work?
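For contrast, every one of those cases becomes unambiguous under the current rule as soon as the widening is written out; a sketch reusing the names above (foo2/foo3 omitted, since the bracketed forms behave the same way once the operand is widened):
const std = @import("std");

fn identity(x: anytype) @TypeOf(x) { return x; }
fn identity2(x: u32) u32 { return x; }

test "explicit widening removes the context question" {
    const bar: u8 = 1;
    const foo1: u32 = @as(u32, bar) << 31;
    const foo4 = @as(u32, bar) << 31; // inferred as u32, no context needed
    const foo5 = identity(@as(u32, bar) << 31); // anytype simply sees a u32
    const foo6 = identity2(@as(u32, bar) << 31);
    try std.testing.expect(foo1 == foo4 and foo4 == foo5 and foo5 == foo6);
}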
Really, the only sensible thing to do in my opinion is for expressions to have well-founded types in and of themselves that do not depend upon the context in which they are evaluated.
As in: someu8 << somevalue is, with no bearing on context, always a u8. So the right-hand side must be a u3, because you can't shift by more bits than there are in the value (undefined behaviour).
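A quick sketch of that rule, checked with @TypeOf and std.math.Log2Int (the variable names are made up for illustration):
const std = @import("std");

const byte: u8 = 1;
const word: u32 = 1;

comptime {
    // The result type of a shift is the LHS type, regardless of context.
    std.debug.assert(@TypeOf(byte << 1) == u8);
    std.debug.assert(@TypeOf(word << 1) == u32);
    // The shift amount type has log2(bit count) bits:
    // u3 for a u8 (0..7), u5 for a u32 (0..31).
    std.debug.assert(std.math.Log2Int(u8) == u3);
    std.debug.assert(std.math.Log2Int(u32) == u5);
}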
One exception I can see where such context-dependent behaviour might be useful is this:
var x: u32 = ...;
var y: u16 = ...;
var z: u48 = x * y; // This expression has size u(16+32) = u48, free of context
var w = x * y; // @TypeOf(w) == u48.
And then have behaviour such that
var z1: u32 = x*y; // This is a u48, but it considers the context in which you chose not to widen
// and emits overflow-checked cast behaviour instead of a compile error
// complaining about the lack of an explicit narrowing cast (such as a truncate).
// Basically you get:
// V always u48.
var z2: u32 = @intCast(T, x*y);
// Context dependent ^ type, but defaults to u48 without context.
But even this seems kind of weird and seems to fall prey to the same problems I mentioned before.
Honestly it's way easier to memorize widening conventions if they can't do anything fancy and always apply at the end.
const foo: u32 = <expression>;
// Exactly equivalent to
const foo: u32 = @as(u32, <expression>);
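A small check of that equivalence, and of the fact that the coercion only touches the result, never the operands (a sketch; the wrapping add is just a stand-in for any u8 arithmetic):
const std = @import("std");

const bar: u8 = 128;
// The arithmetic happens entirely in u8: 128 +% 128 wraps to 0.
// Only that u8 result is then widened to u32; the annotation never
// reaches inside the expression to widen the operands.
const foo: u32 = bar +% bar;

comptime {
    std.debug.assert(foo == 0); // not 256
}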