Hacker News

Well, you probably shouldn't just throw up your hands and knowingly invoke UB for starters.


calloc() returns NULL on error, and the program dies on the first access. I'm not trying to win an argument here; I'm genuinely interested in finding a good strategy for dealing with memory allocation errors.
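One common answer is to check the result at the call site instead of relying on the crash. A minimal sketch of a checking wrapper (the name xcalloc is illustrative, not something from the thread):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch: check calloc's result instead of relying on a later crash.
   Dereferencing NULL is undefined behavior, not a guaranteed trap. */
void *xcalloc(size_t nmemb, size_t size)
{
    void *p = calloc(nmemb, size);
    if (p == NULL && nmemb != 0 && size != 0) {
        /* calloc(0, 0) may legally return NULL, so guard for that */
        fputs("fatal: out of memory\n", stderr);
        exit(EXIT_FAILURE);
    }
    return p;
}
```

This only implements the "die with a message" policy; the rest of the thread debates whether that policy is the right one.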


15-odd years ago that was still a lingering point of debate in programming, before computers had lots of memory and disk to spare, so I eventually wrote my own memory manager for C++, since do-it-myself is my default approach to everything. At the time I was using a Metrowerks IDE, and surprisingly, my memory manager outperformed theirs by a fair bit (and allowed for some nifty debugging of other common problems, like memory leaks).

I set it up to support multiple heaps in the application, so that the application could have a heap for, say, network support, and a separate heap for something else. The memory manager used a modified best-fit algorithm and generally didn't have problems with fragmentation. If the application ran out of memory in one heap, it wasn't completely crippled -- a malloc error in the buffers heap wouldn't prevent a dialog from being displayed from the UI heap.

It also had a "reserve" heap that could be made available to the application in particularly dire situations, and since the blocks in each heap grew towards the heap's table, it had a function that could analyze and compress the table to squeeze a few more bytes out of the heap.

There was very little overhead and it worked nicely.
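The multiple-heap idea above can be illustrated with separate fixed arenas, one per subsystem, so exhausting one doesn't starve the others. All names here (Heap, heap_init, heap_alloc) are invented for illustration, not the actual manager described:

```c
#include <stddef.h>

/* Illustrative sketch of per-subsystem heaps: each one is a fixed
   arena with a simple bump allocator. A failure in one heap (e.g.
   network buffers) leaves the others (e.g. UI) untouched. */
typedef struct {
    unsigned char *base;
    size_t size;
    size_t used;
} Heap;

void heap_init(Heap *h, void *buf, size_t size)
{
    h->base = buf;
    h->size = size;
    h->used = 0;
}

void *heap_alloc(Heap *h, size_t n)
{
    size_t aligned = (n + 15u) & ~(size_t)15u;  /* 16-byte alignment */
    if (aligned < n || h->used + aligned > h->size)
        return NULL;                            /* only this heap is full */
    void *p = h->base + h->used;
    h->used += aligned;
    return p;
}
```

This omits the best-fit algorithm, the reserve heap, and table compression from the original manager; it only shows the isolation property.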


I am genuinely interested in finding a good strategy to deal with memory allocation errors

Try allocating less memory and switching to a slower but less memory-intensive approach. Or try freeing up memory from other places where it isn't needed right now, and deal with the performance hit of reallocating that memory later.

Of course both of these approaches require a fair amount of architectural changes, but neither is unreasonable.
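The "free something elsewhere and retry" idea can be sketched as an allocation routine that calls a caller-supplied reclaim hook on failure. The names alloc_with_fallback and drop_cache are hypothetical:

```c
#include <stdlib.h>

/* Sketch: on malloc failure, ask the application to release memory it
   can live without (caches, pools), then retry. The hook returns
   nonzero as long as it managed to free something. */
void *alloc_with_fallback(size_t n, int (*reclaim)(void))
{
    void *p = malloc(n);
    while (p == NULL && reclaim != NULL && reclaim())
        p = malloc(n);
    return p;
}

/* Example reclaim hook: drop a single cached buffer. */
static void *cache = NULL;

static int drop_cache(void)
{
    if (cache == NULL)
        return 0;       /* nothing left to release */
    free(cache);
    cache = NULL;
    return 1;           /* released something; worth retrying */
}
```

As the comment says, making this work in a real program requires architectural support: every cache owner has to register a reclaim path.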


No; the reasonable answer is to terminate and either log the failure or restart the process if it is critical.

Use process isolation to handle recovery from OOM situations if automated recovery is required.

A process that has run out of memory is likely in one of two situations: either it has an unfixed memory leak, or it is working with an input that is too large for the memory resources of the system it is running on, in which case it is likely thrashing. In both situations the best way out is to terminate. In the former, regular restarts can still keep the system as a whole functional; in the latter, hanging on will just make sure the system keeps swapping.

In situations where restarting is not an option, it's better not to have dynamic memory allocation at all. But fault tolerance is generally a better strategy; see e.g. Erlang: systems should be designed so that processes can be restarted.
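The process-isolation approach can be sketched with a small supervisor on POSIX systems: a parent forks a worker and restarts it if it dies, rather than trying to recover inside the dying process. All names here (supervise, the worker functions) are illustrative:

```c
#include <sys/wait.h>
#include <unistd.h>

/* Sketch of OOM recovery via process isolation: run the worker in a
   child process; if it exits abnormally or with an error (e.g. it
   aborted on malloc failure, or the OOM killer got it), restart it. */
int supervise(int (*worker)(void), int max_restarts)
{
    for (int i = 0; i <= max_restarts; i++) {
        pid_t pid = fork();
        if (pid < 0)
            return -1;              /* fork itself failed */
        if (pid == 0)
            _exit(worker());        /* child: run the job, report status */
        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            return 0;               /* worker finished cleanly */
        /* otherwise fall through and restart */
    }
    return -1;                      /* gave up after max_restarts */
}

/* Example workers for illustration. */
static int ok_worker(void)  { return 0; }
static int bad_worker(void) { return 1; }
```

This is the Erlang-style "let it crash" pattern in miniature: the supervisor stays small and allocation-free, so it survives the worker's failure.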


The architecture cost of this is _huge_; it's only worth doing on reliability-critical systems, where you might consider banning dynamic memory allocation altogether.

I think it's perfectly reasonable for quick tech demo projects to never check anything. fopen() is probably worth a little bit of checking, but not malloc().


You could either try to free memory elsewhere, or die with a nice error message.


Nonsense.

Running out of memory isn't the application's responsibility.

If this is a job for anyone, it's a job for the operating system; however, malloc() fails infrequently enough that it hasn't yet been worth it.

Or does your Minecraft implementation in C pre-allocate memory for its nice error message at startup, resorting to direct I/O operations against video ports if that allocation fails?


I thought it was common ground for C game devs to work out how much memory the game needs in total, allocate it at startup, and handle its management manually (for performance reasons).

Wouldn't this also allow for creating a simple out-of-memory error on start?
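That pattern is just one big allocation at startup with an immediate failure check. A hedged sketch (the budget constant and names are invented for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of "allocate everything up front": grab the whole memory
   budget at startup and fail fast if it can't be had. Afterwards the
   game carves allocations out of this block itself and never calls
   malloc again on the hot path. */
#define GAME_MEMORY_BUDGET (64u * 1024u * 1024u)  /* e.g. 64 MiB */

static void *game_memory = NULL;

int game_memory_init(void)
{
    game_memory = malloc(GAME_MEMORY_BUDGET);
    if (game_memory == NULL) {
        fputs("fatal: not enough memory to start\n", stderr);
        return -1;   /* the simple out-of-memory error, at startup */
    }
    return 0;
}
```

One caveat: on systems that overcommit memory (e.g. Linux by default), this malloc can succeed and the process can still be killed later when the pages are first touched, so the startup check is not a complete guarantee.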


Nonsense.

Well, it depends. Writing a Minecraft clone? Yeah, you can probably just give up. That doesn't mean you shouldn't be checking for failure. Check; if it failed, pop up a message, log something, and die.

However, not all applications are created equal. If I'm writing a (for example) safety critical piece of code then I may not have the luxury of just exiting (I may not be using dynamic memory allocation at all either, but that's neither here nor there.) It may be more beneficial to my users (or absolutely required) to attempt to recover.

Not all software can just exit on a whim, but obviously this is a small portion of the software that exists.


pop up a message, log something

People forget that on many implementations even printf can call malloc (e.g. to allocate stdio buffers). Logging probably isn't an option unless you are doing something special.
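The "something special" usually means bypassing stdio entirely: write(2) with a static string allocates nothing. A sketch on POSIX (the function name oom_log is illustrative; it takes a file descriptor so it can be tested, where STDERR_FILENO would be the normal choice):

```c
#include <unistd.h>
#include <string.h>

/* Sketch of allocation-free logging: a static message written with
   write(2), avoiding stdio, which may allocate buffers on first use. */
void oom_log(int fd)
{
    static const char msg[] = "fatal: out of memory\n";
    /* write() does not touch the heap; cast silences unused-result
       warnings -- there's nothing useful to do if logging fails too */
    (void)!write(fd, msg, sizeof msg - 1);
}
```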


Yeah, that's a good point. It certainly gets hairy real quick.


Another dependency here, but I use APR pools. You can keep your allocations to a defined scope, which makes freeing easy, and you can also have some guarantees that the pool will have bytes to give you. Less error checking and less expensive mallocing required inside critical sections.


Undefined behavior doesn't mean "your program will crash".



