Interesting analogy! There's no exact infinite-tape equivalent in MemGPT, but you can loosely view the virtual context as a tape. Moving the head would correspond to MemGPT indexing into virtual context: if the data is in-context (inside the LLM's window), it's a direct read; if it isn't (it may or may not be stored in external context), the read requires a function call to "page in" the data.
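A rough sketch of that "tape read" logic, purely illustrative: the names `main_context`, `external_store`, and `tape_read` are hypothetical, not MemGPT's actual API.

```python
def tape_read(key, main_context, external_store):
    """Read `key` from virtual context, paging it in from external
    storage when it isn't already in the LLM's window."""
    if key in main_context:
        # In-context: the LLM reads it directly, no function call needed.
        return main_context[key]
    if key in external_store:
        # Out-of-context but stored externally: a function call
        # "pages in" the data to the in-context window.
        main_context[key] = external_store[key]
        return main_context[key]
    # Not stored anywhere: the read simply fails.
    return None

main = {"persona": "helpful assistant"}
external = {"user_fact": "prefers metric units"}
print(tape_read("user_fact", main, external))  # pages in from external
```

In a real system the page-in step would also have to evict or summarize other in-context content to stay under the window limit, which is where the analogy to a plain tape head breaks down.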