CGI and FastCGI are two different things in two different domains. Well, the domains are not that different, but different enough that CGI solves a real problem and makes sense, while FastCGI does not. CGI is the interface between an HTTP transaction and a process: it answers the question "How do we turn an HTTP request into the execution of a process?". FastCGI answers the question "How do we turn an HTTP request into a FastCGI request?", a convolution that leaves you asking: why are we jumping through this hoop? Is FastCGI actually bringing anything to the table? Is it actually any harder to run an HTTP server instead of a FastCGI server, if the two are so trivially connected?
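To make the "HTTP request into a process" point concrete, here is a minimal sketch of what a CGI program looks like (per RFC 3875, request metadata arrives in environment variables and the response goes to stdout; the handler function is just my own structuring, not part of the spec):

```python
#!/usr/bin/env python3
# Minimal CGI program: the web server forks and execs this once per request,
# passing request metadata in environment variables (REQUEST_METHOD,
# PATH_INFO, QUERY_STRING, ...) and the request body, if any, on stdin.
# The process answers by writing headers, a blank line, then the body
# to stdout, and the server relays that back to the client.
import os
import sys

def handle(environ):
    # Pull the request metadata out of the CGI meta-variables.
    method = environ.get("REQUEST_METHOD", "GET")
    path = environ.get("PATH_INFO", "/")
    body = f"You made a {method} request for {path}\n"
    # Headers, blank line, body: that's the whole response format.
    return "Content-Type: text/plain\r\n\r\n" + body

if __name__ == "__main__":
    sys.stdout.write(handle(os.environ))
```

That really is the whole interface: process per request, environment in, stdout out. FastCGI replaces this with a persistent process speaking a bespoke binary record protocol over a socket, which is the "why not just HTTP?" question above.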
I am halfway convinced the only reason FastCGI exists is that we had gotten into the mindset that executable code in an HTTP context had to run via the Common Gateway Interface, so when we wanted to change to a persistent-process model, it had to carry the CGI name as well. FastCGI to the rescue: it does exactly what HTTP does, but is not HTTP, and most importantly has "CGI" in the name.
As to the article's complaint, "An HTTP relay server had a bug, therefore HTTP is intrinsically bad": well, it failed to convince me. I am not exactly in that domain (backend web development), so my view is not worth much. But I feel that your internal HTTP (application) servers should be built as if they were going directly onto the open web. Then you put some relay servers in front to block, balance, and route requests, while avoiding putting too many smarts in the relays. A smart network is almost always a bad idea; try to stick with a dumb network and smart edges.