What do you do when incrementing (or adding to) an integer in a loop? You’d need sophisticated analysis to determine that the overall sum will always fit into a specific type, or otherwise you’d end up with bigints all the time.
You wouldn't modify an integer in place; at least, it wouldn't be idiomatic. Quite a few languages, especially functional ones, have that constraint. You can always use a range-based for:
for i in [0, 100]:
// i is known to be in [0, 100]
// the argument is known to be in [0, 200]
do_something(i*2)
If you really want unbounded growth, you need a bignum. If you want to be bounded in size but not in time, you have to specify overflow behavior. Something like (ugly pseudocode):
integer[modulo 256] a = 0;
while some_condition:
a += 1
or
integer[0, 1000] b = 0;
while some_condition:
guard if b < 1000:
// b is now in [0, 999], so b + 1 fits in [0, 1000]
b += 1
The whole point is forcing you to make up your mind about overflow behavior (instead of just using int32 or int everywhere and hoping i is going to stay "small").
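A rough Rust approximation of the two variants above. Rust has no ranged integer types, so the ranges only live in comments and the guard is enforced by hand; the function names here are made up for illustration:

```rust
// Akin to `integer[modulo 256]`: arithmetic is explicitly mod 256.
fn wrapping_count(start: u8, steps: u32) -> u8 {
    let mut a = start;
    for _ in 0..steps {
        a = a.wrapping_add(1); // 255 wraps around to 0, never panics
    }
    a
}

// Akin to `integer[0, 1000]` with a guard: increment only when it fits.
fn guarded_count(steps: u32) -> u16 {
    let mut b: u16 = 0;
    for _ in 0..steps {
        if b < 1000 {
            // b is in [0, 999] here, so b + 1 stays within [0, 1000]
            b += 1;
        }
    }
    b
}

fn main() {
    println!("{}", wrapping_count(250, 10)); // 260 mod 256 = 4
    println!("{}", guarded_count(2000));     // capped at 1000
}
```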
Isn't uint8 exactly what your 'integer[modulo 256]' is? And for unbounded growth you do need a bignum and dynamic allocation, so I'm not sure I see any benefit to explicitly fine-graining the range instead of using a machine word at all times and bignums when needed.
> Isn't uint8 exactly what your 'integer[modulo 256]' is
In C/C++ it is. Obviously, some other languages would disagree.
I think a better example of what GP is thinking about is Rust's approach, where overflowing a u8 panics (in debug builds), but you can do x.wrapping_add(y), x.saturating_add(y), x.checked_add(y), etc., depending on what you want to happen when the operation overflows.
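For example, on a u8 (values chosen just so the sum overflows):

```rust
fn main() {
    let x: u8 = 250;
    let y: u8 = 10; // 250 + 10 = 260 does not fit in a u8

    // Each method names its overflow behavior explicitly:
    println!("{}", x.wrapping_add(y));   // modular: 260 mod 256 = 4
    println!("{}", x.saturating_add(y)); // clamps to u8::MAX = 255
    println!("{:?}", x.checked_add(y));  // None: the sum doesn't fit
    println!("{:?}", x.checked_add(3));  // Some(253): this one fits

    // Plain `x + y` would panic in debug builds and wrap in release
    // builds (unless overflow checks are enabled there too).
}
```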
Yes. My point is, you'd need that range-based static analysis, and probably (judging from how interval arithmetic usually turns out) you'd need bigints much more often than they are currently used.
If you expect arithmetic to be non-modular, you do need to either verify that overflow can never happen or use bignums. Otherwise you have a bug.