All restarted Arnoldi methods basically work by repeatedly growing and shrinking a search space until it contains good approximations to the desired Schur vectors. Shrinking (= restarting) is necessary to keep memory requirements constant, and for performance.
Restarting is somewhat non-trivial because it has to preserve the Arnoldi relation. ARPACK uses implicit restarting, which is a very beautiful trick based on "perfect shifts": a single iteration of the shifted QR algorithm, with the shift taken to be an exact eigenvalue, reorders the basis vectors (moving the vectors you want to remove to the back, so you can truncate them) while preserving the Arnoldi relation. However, perfect shifts in the QR algorithm are good on paper but can lose an arbitrary amount of precision in finite arithmetic, especially with nearly repeated eigenvalues.
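For concreteness, the Arnoldi relation that a restart must preserve is A V_k = V_{k+1} H_k, with orthonormal columns in V and H upper Hessenberg. A minimal NumPy sketch of plain (unrestarted) Arnoldi, checking the relation numerically — illustrative only, not how ARPACK implements it:

```python
import numpy as np

def arnoldi(A, v0, k):
    """Build V (n x (k+1)) with orthonormal columns and H ((k+1) x k)
    upper Hessenberg such that A @ V[:, :k] == V @ H (the Arnoldi relation)."""
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
V, H = arnoldi(A, rng.standard_normal(50), 10)
# Residual of the Arnoldi relation A V_k = V_{k+1} H_k,
# equivalently A V_k = V_k H_k + h_{k+1,k} v_{k+1} e_k^T
resid = np.linalg.norm(A @ V[:, :10] - V @ H)
```

Any restart scheme has to shrink V and H while keeping this residual at rounding-error level.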
I think newer implementations (like SLEPc's) are often based on Stewart's Krylov-Schur paper. It's not as pretty, but it is certainly more stable.
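The core of Stewart's idea, sketched loosely: a Krylov-Schur decomposition A V = V S + v b^T replaces the Hessenberg matrix by a (sorted) Schur form, so purging unwanted directions becomes "reorder the Schur form, then drop trailing columns" — no perfect shifts needed. A toy version in NumPy/SciPy; I use a symmetric A so the real Schur form is diagonal and no 2x2 block can straddle the truncation point, and all variable names are mine, not SLEPc's:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
n, m = 40, 12
M = rng.standard_normal((n, n))
A = M + M.T                          # symmetric test matrix

# Order-m Arnoldi factorization: A V_m = V_m H_m + beta * v_next e_m^T
V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
q = rng.standard_normal(n)
V[:, 0] = q / np.linalg.norm(q)
for j in range(m):
    w = A @ V[:, j]
    for i in range(j + 1):
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]
Hm, beta, v_next = H[:m, :], H[m, m - 1], V[:, m]

# Krylov-Schur restart: sorted Schur form of H_m, rotate the basis, truncate.
# The sort puts the eigenvalues we keep (here: the positive ones) first.
T, Q, p = schur(Hm, output="real", sort=lambda re, im: re > 0.0)
Vt = V[:, :m] @ Q                    # rotated basis
b = beta * Q[m - 1, :]               # e_m^T Q picks the last row of Q

# Keeping only the first p columns still satisfies a Krylov-Schur relation:
# A Vt_p = Vt_p T_p + v_next b_p^T, so truncation is just dropping columns.
resid = np.linalg.norm(A @ Vt[:, :p] - Vt[:, :p] @ T[:p, :p]
                       - np.outer(v_next, b[:p]))
```

The truncated factorization can then be re-expanded by further Arnoldi steps; the trade-off versus implicit restarting is losing the Hessenberg structure but gaining backward stability.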
In short: give ARPACK a matrix with a cluster of nearly repeated eigenvalues at the boundary of the convex hull of the spectrum, and it tends to fail.
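This is easy to probe through SciPy's ARPACK wrapper. The example below is illustrative, not a guaranteed failure — whether ARPACK converges on such a cluster depends on the gap, the tolerance, and the random start vector; the matrix and parameters here are my own choices:

```python
import numpy as np
from scipy.sparse.linalg import eigs, ArpackNoConvergence

# Diagonal test matrix whose three largest eigenvalues form a
# near-repeated cluster at the boundary of the spectrum.
n = 500
d = np.linspace(0.0, 1.0, n)
d[-3:] = 1.0 + 1e-13 * np.arange(3)   # cluster split far below the tolerance
A = np.diag(d)

try:
    vals, _ = eigs(A, k=3, which="LM", maxiter=100, tol=1e-12)
    outcome = "converged"
except ArpackNoConvergence as err:
    vals = err.eigenvalues            # ARPACK returns whatever did converge
    outcome = "no convergence"
print(outcome, np.sort(vals.real))
```

Tightening `tol` below the cluster gap, or shrinking the gap further, is what pushes the implicitly restarted iteration into the precision-loss regime described above.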