Multi-threaded FastCGI App
I want to write a FastCGI app that handles multiple simultaneous requests using threads. I had a look at the threaded.c sample that comes with the SDK:
```c
#define THREAD_COUNT 20

static int counts[THREAD_COUNT];

static void *doit(void *a)
{
    int rc, i, thread_id = (int)a;
    pid_t pid = getpid();
    FCGX_Request request;
    char *server_name;

    FCGX_InitRequest(&request, 0, 0);

    for (;;)
    {
        static pthread_mutex_t accept_mutex = PTHREAD_MUTEX_INITIALIZER;
        static pthread_mutex_t counts_mutex = PTHREAD_MUTEX_INITIALIZER;

        /* Some platforms require accept() serialization, some don't.. */
        pthread_mutex_lock(&accept_mutex);
        rc = FCGX_Accept_r(&request);
        pthread_mutex_unlock(&accept_mutex);

        if (rc < 0)
            break;

        server_name = FCGX_GetParam("SERVER_NAME", request.envp);

        FCGX_FPrintF(request.out, ...); /* response body elided; counts[] is
                                           updated here under counts_mutex */

        FCGX_Finish_r(&request);
    }

    return NULL;
}

int main(void)
{
    int i;
    pthread_t id[THREAD_COUNT];

    FCGX_Init();

    for (i = 1; i < THREAD_COUNT; i++)
        pthread_create(&id[i], NULL, doit, (void*)i);

    doit(0);

    return 0;
}
```
The FastCGI specification explains how the web server determines how many connections a FastCGI app supports:

The Web server can query specific variables within the application. The server will typically perform a query on application startup in order to automate certain aspects of system configuration.
…
• FCGI_MAX_CONNS: The maximum number of concurrent transport connections this application will accept, e.g. "1" or "10".
• FCGI_MAX_REQS: The maximum number of concurrent requests this application will accept, e.g. "1" or "50".
• FCGI_MPXS_CONNS: "0" if this application does not multiplex connections (i.e. handle concurrent requests over each connection), "1" otherwise.
But the return values for this query are hard-coded in the FastCGI SDK: it returns "1" for FCGI_MAX_CONNS and FCGI_MAX_REQS, and "0" for FCGI_MPXS_CONNS. So it seems the threaded.c sample can never receive multiple connections.
I tested the sample with lighttpd and nginx, and the app handled only one request at a time. How can I make the application handle multiple requests? Or is this the wrong approach?
I tested the threaded.c program with http_load. The program was running behind nginx, with a single instance of the program. Each request takes about 2 seconds to process (see the first-response times below), so if requests were served sequentially, I would expect 20 requests to take 40 seconds even when sent in parallel. Here are the results (I used the same numbers as Andrew Bradford: 20, 21, and 40):
20 requests, 20 in parallel, took 2 seconds:

```
$ http_load -parallel 20 -fetches 20 request.txt
20 fetches, 20 max parallel, 6830 bytes, in 2.0026 seconds
341.5 mean bytes/connection
9.98701 fetches/sec, 3410.56 bytes/sec
msecs/connect: 0.158 mean, 0.256 max, 0.093 min
msecs/first-response: 2001.5 mean, 2002.12 max, 2000.98 min
HTTP response codes:
  code 200 -- 20
```

21 requests, 20 in parallel, took 4 seconds:

```
$ http_load -parallel 20 -fetches 21 request.txt
21 fetches, 20 max parallel, 7171 bytes, in 4.00267 seconds
341.476 mean bytes/connection
5.2465 fetches/sec, 1791.55 bytes/sec
msecs/connect: 0.253714 mean, 0.366 max, 0.145 min
msecs/first-response: 2001.51 mean, 2002.26 max, 2000.86 min
HTTP response codes:
  code 200 -- 21
```

40 requests, 20 in parallel, took 4 seconds:

```
$ http_load -parallel 20 -fetches 40 request.txt
40 fetches, 20 max parallel, 13660 bytes, in 4.00508 seconds
341.5 mean bytes/connection
9.98732 fetches/sec, 3410.67 bytes/sec
msecs/connect: 0.159975 mean, 0.28 max, 0.079 min
msecs/first-response: 2001.86 mean, 2002.62 max, 2000.95 min
HTTP response codes:
  code 200 -- 40
```
So, this proves that even though the FCGI_MAX_CONNS, FCGI_MAX_REQS, and FCGI_MPXS_CONNS values are hard-coded, requests are served in parallel.
When nginx receives multiple requests, it puts them in the FCGI application's queue back to back. It does not wait for the response to the first request before sending the second. In the FCGI application, while one thread is serving the first request, taking however much time it needs, another thread does not wait for the first to finish; it picks up the second request and starts working on it. And so on.

So the only time lost is the time it takes to read a request from the queue, and that time is negligible compared to the time it takes to process a request.