At the lowest level this is how it works, so using ring buffers as the abstraction makes tremendous sense. Plus, with a little care, it can be done safely without requiring a context switch on the fast path.
It's also easy to batch things naturally: when load is low the batch size is 1, and it grows on its own as more work gets queued while the prior batch is being processed.
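A minimal sketch of that natural batching, assuming a simple single-consumer ring buffer (the `RingBuffer` class and its `drain` method here are hypothetical, just to illustrate the idea): the consumer takes everything published since its last pass in one go, so the batch is 1 item under light load and grows whenever producers outpace the consumer.

```python
class RingBuffer:
    """Hypothetical single-consumer ring buffer illustrating natural batching."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0  # next slot to read
        self.tail = 0  # next slot to write

    def push(self, item):
        # Producer side: fail loudly if the ring is full.
        if self.tail - self.head == self.capacity:
            raise BufferError("ring full")
        self.buf[self.tail % self.capacity] = item
        self.tail += 1

    def drain(self):
        # Consumer side: take every item published so far as one batch.
        batch = []
        while self.head != self.tail:
            batch.append(self.buf[self.head % self.capacity])
            self.head += 1
        return batch

rb = RingBuffer(8)
rb.push(1)
print(rb.drain())      # light load: batch of 1 -> [1]

for i in range(2, 6):  # work piles up while the consumer is busy
    rb.push(i)
print(rb.drain())      # one larger batch -> [2, 3, 4, 5]
```

In a real implementation `head` and `tail` would be atomic indices so producer and consumer can run on different threads without a lock on the fast path; the batching behavior falls out of `drain` reading however much has accumulated.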
I love ring buffers and queues.