
Yes, you could, but chances are that the provable upper bounds on memory usage or execution time are orders of magnitude above what you think it should take. Any operation that produces a new value, where you cannot prove that another value becomes unreachable (in which case the compiler could translate it into a destructive update), could trigger a garbage collection that might write half a GB of memory and take 0.1 seconds (the numbers may be realistic, but if they are, it's pure luck).


Sure... a garbage collector would mess you up, but garbage collection isn't an intrinsic property of stateless languages.

EDIT: Seems I'm wrong http://www.haskell.org/haskellwiki/GHC/Memory_Management


It is not intrinsic, but hard to avoid. Alternatives include:

- just allocate, never collect (not infeasible with 64-bit memory spaces, if you have lots of swap and can restart fairly frequently, but bad for cache locality)

- garbage collect at provably idle times. The question is: when are those?

- concurrent garbage collection, with a proof that it can keep up with allocations

Finally, you could try and design a language where one can (often) prove that bar can be mutated in place in expressions such as

    bar = foo(bar,baz)
(That's possible if you can prove there's only one reference to bar at the time of the call)

(Rust's memory model may help here)
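As a rough sketch of how Rust's ownership rules express that "only one reference" guarantee: if `foo` takes `bar` by value, the caller's binding is moved, so `foo` holds the sole handle to the data and can safely mutate it in place rather than allocating a fresh copy. (`foo`, `bar`, and `baz` here are just the placeholder names from the expression above.)

```rust
// Hypothetical `foo`: takes ownership of `bar`, so there is provably
// only one reference to it, and the buffer can be updated destructively.
fn foo(mut bar: Vec<i64>, baz: i64) -> Vec<i64> {
    for x in bar.iter_mut() {
        *x += baz; // in-place update: no new allocation for the vector
    }
    bar // hand ownership back to the caller
}

fn main() {
    let mut bar = vec![1, 2, 3];
    // `bar = foo(bar, baz)`: the move guarantees uniqueness at call time.
    bar = foo(bar, 10);
    assert_eq!(bar, vec![11, 12, 13]);
}
```

Attempting to use the old `bar` after the call (without the reassignment) is rejected by the compiler as a use-after-move, which is exactly the static "one reference" proof the comment describes.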

I am not aware of any claims that it is possible to write meaningful systems based on this model that do not have to allocate new objects regularly. The problem is that, to guarantee the 'one reference' property, you have to make fresh copies of objects all the time, and that defeats the purpose of the 'one reference' rule.


Thank you for the explanation. That's a lot to think about.



