This reminds me of the bad old days of Rails, when every man and his dog had some random hacky and myopic solution for keeping Rails and Mongrel running. Just say no to Node- or Rails-specific stuff, kids; use system-level tools for running processes.
Edit: removed references to "proprietary", as indexzero is correct about meaning of that.
That's a strange usage of "proprietary" since it's all free and open source (MIT). Either way, the nodejs child_process module is really just a lightweight wrapper around execvp():
I just have to wonder: after listing all those projects that already do this job, and do it well, why create/use `forever`, which hasn't had the same exposure to production environments yet?
Any examples of when this would be desirable? Seems to me that monitoring and running tools should just, you know, monitor and run things. Separation of concerns.
I'd much rather have access to a well-structured library (node.js) where sockets and http are first-class citizens, as opposed to being forced into a canned solution or having to write C or bash.
@intranation Just to toss out a scenario: let's say that under high traffic your application hits an edge case and starts to crash very frequently. Suppose that, as a devops person, you want to receive an email, SMS, or IM when something like this happens.
Would you consider that a valid concern of process monitoring?
This is a planned feature for Forever's command line, but if you use it from node directly you could implement it now:
Bit by bit: monit restarts the server a defined number of times; if the service still fails to start, it gives up. If the service is not running, nagios dispatches the SNMP traps, SMSes, IMs, and all the rest.
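For reference, the restart-then-give-up behavior described above looks roughly like this in a monit config; the service name, pidfile, and init script paths are made up for illustration.

```
# Hypothetical monit stanza: restart the service when it dies, but
# stop trying after repeated failures and leave alerting to nagios.
check process myapp with pidfile /var/run/myapp.pid
  start program = "/etc/init.d/myapp start"
  stop program  = "/etc/init.d/myapp stop"
  if 5 restarts within 5 cycles then timeout
```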
I want my monitoring process to be rock solid. Extensions such as notifications should be done in another process, otherwise you're at the mercy of whatever email/SMS/IM library you include to do the notification.
This leads to a very bad practice, though, which is that bugs get left lying around in production systems. You should really form the habit of analyzing such crashes, getting to the root cause of the problem, and making sure it never happens again.
It's good to have a system like this in place, but it should be used as an insurance policy against unknown bugs, not as a way to work around known and reproducible ones.
When I see a blog post like this, it serves more to raise doubts about Node.js than anything else.
Before today, I had been considering Node for an upcoming product. Now I read that people are actually building and releasing software to work around the fact that Node.js servers regularly incapacitate themselves. Really? Like it just goes down and doesn't know how to cycle itself? Ouch. That translates to me as "Don't go anywhere near Node.js until they get it stable."
So yeah, I'm sure this is a great way to shore up your system. But I'm going to think twice before investing time in a system that needs shoring up.
More of a safety mechanism than anything else. I use daemontools to run qmail, and qmail has never crashed. But if it did, I would still want to receive my email, even though it's "horrible" that qmail could crash.
Sometimes the safest way to handle an error is to kill the process and start a new one. Starting recovery from a known-good state is better than starting from a known-bad one.
Seems reasonable enough. From the tone of the post, it sounded like crashing was a fairly common thing to happen with Node.js, and that it didn't cycle itself.
Coming from the context of IIS/ASP.NET, which hasn't yet crashed on me (at least not without cycling itself harmlessly) in the 10 years I've been running sites on it, the possibility that you'd need to worry about such things seemed a bit novel.
So basically what you're saying is that it just follows the Unix philosophy and doesn't run its own daemon to cycle it if it falls down. That doesn't sound anywhere near as unreasonable.
Yeah, exactly. I have not looked at the app described in the article, but daemontools is a set of programs that let you manage persistent processes. "supervise <directory>" will run a script called "run" in <directory>, restarting it if it fails. It also keeps status information around, so you can run "svstat <directory>" to check its status, "svc -d <directory>" to kill it, etc.
On top of that is svscan, which will look for service directories in a directory, and will start a supervise instance for each. It will also start a supervise instance for <directory>/log if it exists, piping the output of the supervised script to the logging process.
In this way, you just need to write a service that writes log messages to stdout, and it will handle log rotation and process management. It's really a nice system, although apparently rather unpopular. I guess it's more fun to write your own logging and daemonization code, rather than let a very small C program that has existed unchanged for 10 years do it. Or something.
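A sketch of what such a service directory might look like, assuming a hypothetical node app at `/opt/myapp/server.js` (the `/tmp` path here is just for illustration; you'd normally symlink the directory under svscan's service directory):

```shell
# Hypothetical daemontools service layout:
#   myapp-service/run      -- started (and restarted) by supervise
#   myapp-service/log/run  -- receives the app's stdout via a pipe
mkdir -p /tmp/myapp-service/log

cat > /tmp/myapp-service/run <<'EOF'
#!/bin/sh
# exec so supervise tracks the node process itself, not a shell wrapper
exec node /opt/myapp/server.js 2>&1
EOF

cat > /tmp/myapp-service/log/run <<'EOF'
#!/bin/sh
# multilog timestamps each line and rotates ./main automatically
exec multilog t ./main
EOF

chmod +x /tmp/myapp-service/run /tmp/myapp-service/log/run
# then: ln -s /tmp/myapp-service /service/myapp   (svscan picks it up)
```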
Erlang has heart (http://www.erlang.org/doc/man/heart.html) for similar reasons. Of course, it also has distribution, for when the whole computer it's running on dies.
I use runit (http://smarden.org/runit/) instead of daemontools (same sort of thing, but a bit less opinionated about e.g. where it gets installed). Rather than expecting every daemon to implement its own supervisor, logging system, etc. correctly, just use one of those.
Does daemontools care anymore? I use "supervise ~/.dotfiles/service | readproctitle ............." to start my user-local services when I log in. Works like a charm.
It's been a while since I installed daemontools. IIRC it created a bunch of root directories (e.g. /commands). There was also an OpenBSD port for runit already.
Criticizing node.js for not knowing how to restart itself is like criticizing Python for not knowing how to restart itself. node.js is an interpreter, it doesn't implement every part of a production stack.
I don't use Forever, but I use upstart with respawn which is similar, and I use it to monitor not just node.js daemons but also ones written in python, bash, and whatever else. Monitoring software is a reasonable part of a stable production system; it indicates a healthy ecosystem rather than a deficient one.
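For comparison, an upstart job with respawn is only a few lines; the job name, runlevels, and paths below are hypothetical.

```
# /etc/init/myapp.conf -- hypothetical upstart job
description "my node app"

start on runlevel [2345]
stop on runlevel [016]

# respawn the process if it dies, up to 10 times in 5 seconds
respawn
respawn limit 10 5

exec node /opt/myapp/server.js
```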
Oh, I just meant last I checked on blogs and such (not with him personally). IIRC, his main reason for advising against it was that there were too many security holes. This was in a video talk from several months ago if not longer. I'm curious to know if conditions have changed.
daemontools is a beautiful thing; personally I think it's a much better idea to have a general-purpose tool for this kind of thing. I'm not sure why everyone feels the need to reinvent the wheel when it comes to managing background services.
I like how there's a command line interface plus an application interface, so I can tie forever process management into my web interface and backend control interface. Right now the processes are just sitting in a `while true` bash loop.
Just deployed this now. Took all of roughly 10 seconds to set up, and it's working beautifully. I've used supervisor in the past, and this was definitely easier.
The application described in the article is specifically written to interface with Node.js servers. It keeps those processes running, and restarts them as needed. It is not a general process monitor.