Hacker News

Others have noted that your reply is a non-sequitur. However, I'd like to address it anyway, specifically the "luck" part.

I have written all of what you describe: large-scale multithreaded high-performance distributed applications using Perl.

When necessary I have written XS bindings to C routines, but this is surprisingly infrequent: most high-performance system interfaces are already exposed to Perl in an efficient manner. I've had Perl processes with tens of gigabytes of memory resident. I have written non-blocking state machines in Perl which serviced over 100k connections in parallel, and which have run on over 100k machines at once. I have worked on huge, modular, multi-team undertakings in Perl. The single caveat is that separate processes with explicitly shared memory are the parallelism tools of choice in Perl.
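To make the "separate processes" model concrete, here is a minimal sketch of the fork-per-task pattern using only core Perl: the parent forks a worker per task and collects results over pipes. The function name and task set are illustrative, not from any of the projects described above.

```perl
#!/usr/bin/perl
# Sketch of process-based parallelism in core Perl:
# fork a worker per task, collect each result over its own pipe.
use strict;
use warnings;

sub run_parallel {
    my @tasks = @_;
    my (@results, @readers);

    for my $task (@tasks) {
        pipe(my $r, my $w) or die "pipe: $!";
        my $pid = fork() // die "fork: $!";
        if ($pid == 0) {            # child: run the task, report back
            close $r;
            print {$w} $task->(), "\n";
            close $w;
            exit 0;
        }
        close $w;                   # parent keeps the read end
        push @readers, $r;
    }

    for my $r (@readers) {
        chomp(my $line = <$r>);     # blocks until the child is done
        push @results, $line;
        close $r;
    }
    wait() for @readers;            # reap the children
    return @results;
}

my @squares = run_parallel(map { my $n = $_; sub { $n * $n } } 1 .. 4);
print "@squares\n";                 # prints "1 4 9 16"
```

Because each child writes to its own pipe, results come back in task order with no locking; at larger scale the pipes would typically be multiplexed with a non-blocking select/epoll loop rather than read sequentially.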

Luck wasn't a part of any of this. Perl will scale quite well, and if you think it doesn't, you likely have a few things to learn about what the language/platform can do.



Just curious: what did you use for IPC at this scale?


I'm describing about a dozen separate projects above. For fast same-system IPC I often used an in-house library (with Perl bindings) which implemented a memory-mapped, sharable hash table. Unfortunately it hasn't been released as open source. The closest comparable open project is probably http://fallabs.com/kyotocabinet/, though it's inferior in a few important ways.
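Since the in-house library isn't public, here is a minimal core-Perl sketch of the underlying idea: a memory segment explicitly shared between forked processes, via SysV shm. This is just the raw sharing primitive, not a hash table; CPAN modules such as Cache::FastMmap layer a real shared get/set interface on top of mmap.

```perl
#!/usr/bin/perl
# Minimal explicit shared memory between forked processes (SysV shm).
# Illustrative only -- a shared hash table adds hashing, locking,
# and allocation on top of a segment like this.
use strict;
use warnings;
use IPC::SysV qw(IPC_PRIVATE IPC_CREAT IPC_RMID);

my $size = 64;
my $id = shmget(IPC_PRIVATE, $size, IPC_CREAT | 0600)
    // die "shmget: $!";

my $pid = fork() // die "fork: $!";
if ($pid == 0) {                    # child: write into the shared segment
    shmwrite($id, "hello from child", 0, $size) or die "shmwrite: $!";
    exit 0;
}
waitpid($pid, 0);                   # ensure the child's write has happened

shmread($id, my $buf, 0, $size) or die "shmread: $!";
$buf =~ s/\0+\z//;                  # strip null padding
print "$buf\n";                     # prints "hello from child"

shmctl($id, IPC_RMID, 0);           # remove the segment
```

Unlike pipes, the segment is random-access and survives across many reader/writer processes, which is what makes a shared hash table on top of it practical.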



