Help: accessing Tomcat through an nginx proxy is very slow (Tomcat itself is fast; Windows 7)

Posted 2021-12-06 10:41:13 · 350 characters · 785 views · 17 comments

I haven't been able to solve this for several days; any pointers would be much appreciated. nginx logs and config: https://files.cnblogs.com/files/yanan7890/nginx配置及日志.zip


Comments (17)

猫烠⑼条掵仅有一顆心 2021-12-08 15:18:53

Yes. With no business logic or static resources involved, it sustains roughly 2000 concurrent connections.

小情绪 2021-12-08 15:18:50

Our company and several thousand of our customers only support Win7.

不再见 2021-12-08 15:18:50

When you configure keepalive in the upstream block, note that you must also:

  1. set the proxy to HTTP/1.1 mode;
  2. force the Connection header to keep-alive (i.e. don't forward the client's hop-by-hop value).

Otherwise, even though the upstream expects keepalive, every request Tomcat receives is HTTP/1.0, and the connection is closed at the end of each one.

 

After configuring per the points above (I won't post the exact config, to discourage copy-pasting without understanding), performance improves by a large margin.

But as earlier answerers said, unless it's only an internal system, nginx on Windows performs poorly under high concurrency.
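The two points above can be sketched as a minimal config (the upstream name and ports follow the nginx.conf posted elsewhere in this thread; the keepalive count is an illustrative assumption):

```nginx
upstream naire {
    server 127.0.0.1:8080;
    keepalive 32;   # idle connections to the backend kept open per worker
}

server {
    listen 81;
    location / {
        proxy_pass http://naire;
        proxy_http_version 1.1;          # point 1: talk HTTP/1.1 to the backend
        proxy_set_header Connection "";  # point 2: clear the hop-by-hop Connection
                                         # header; keep-alive is implicit in HTTP/1.1
    }
}
```

Without `proxy_http_version 1.1`, nginx defaults to HTTP/1.0 toward the upstream, which is exactly why the backend closes the connection after every request.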

冷默言语 2021-12-08 15:18:49

Test it in a Linux VM. Why bother with Windows? You're creating problems for yourself.

情痴 2021-12-08 15:18:47

Thanks for the thorough answer. After tuning Tomcat, a single Tomcat instance handles 5000 concurrent connections in load tests, but through nginx the tests can't reach 5000. In that case, does nginx offer no advantage at all, and should I drop it?

笑红尘 2021-12-08 15:18:42

Reply
Linux is where nginx shines. Between Windows and nginx you can only pick one. If it has to be Windows, switch the reverse proxy; Apache still works fine there.

千笙结 2021-12-08 15:18:40

Reply
How is Tomcat configured? Reaching 5000 is already impressive.

孤檠 2021-12-08 15:18:34

Reply
@傻根她弟: I set the heap to 1/4 of the machine's memory and concurrency went right up.
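For reference, one common way to pin Tomcat's heap like this is via `CATALINA_OPTS` in `bin/setenv.sh`; the 16 GB machine and the 4 GB value below are assumptions for illustration:

```shell
# Hypothetical bin/setenv.sh for Tomcat: on a machine with 16 GB of RAM,
# fix the heap at 4 GB (1/4 of memory) so the JVM neither resizes nor swaps.
CATALINA_OPTS="-Xms4g -Xmx4g"
export CATALINA_OPTS
```

Setting `-Xms` equal to `-Xmx` also avoids heap-resize pauses in the middle of a load test.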

浅沫记忆 2021-12-08 15:16:08

It's a limitation of the Windows event model: the Windows build of nginx only has select()/poll(), so nginx is just this slow on Windows. There's no real fix.

偏爱自由 2021-12-08 15:13:17

access.log below. Apart from the timestamps the entries are identical, so I'll only show one:

127.0.0.1 - - [13/Sep/2018:21:46:42 +0800] "GET /HerPeisReport/weixin/mainpage_check2?id=62&sex=1 HTTP/1.1" 200 14820 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_91)"

 

皇甫轩 2021-12-08 15:08:49

error.log below:

2018/09/13 21:46:31 [notice] 51148#43468: sockinit() attempting to access sockapi
2018/09/13 21:46:31 [notice] 51148#43468: Access to sockapi succeded!
2018/09/13 21:46:31 [notice] 51148#43468: using sockapi from "4;"
2018/09/13 21:46:32 [notice] 54408#56072: sockinit() attempting to access sockapi
2018/09/13 21:46:32 [notice] 54408#56072: Access to sockapi succeded!
2018/09/13 21:46:32 [notice] 54408#56072: using sockapi from "4;"
2018/09/13 21:46:32 [notice] 63344#24000: sockinit() attempting to access sockapi
2018/09/13 21:46:32 [notice] 63344#24000: Access to sockapi succeded!
2018/09/13 21:46:32 [notice] 63344#24000: using sockapi from "4;"
2018/09/13 21:46:32 [notice] 30840#66752: sockinit() attempting to access sockapi
2018/09/13 21:46:32 [notice] 30840#66752: Access to sockapi succeded!
2018/09/13 21:46:32 [notice] 30840#66752: using sockapi from "4;"
2018/09/13 21:46:33 [notice] 29300#51960: sockinit() attempting to access sockapi
2018/09/13 21:46:33 [notice] 29300#51960: Access to sockapi succeded!
2018/09/13 21:46:33 [notice] 29300#51960: using sockapi from "4;"
2018/09/13 21:46:33 [notice] 57544#63744: sockinit() attempting to access sockapi
2018/09/13 21:46:33 [notice] 57544#63744: Access to sockapi succeded!
2018/09/13 21:46:33 [notice] 57544#63744: using sockapi from "4;"
2018/09/13 21:46:33 [notice] 14760#20868: sockinit() attempting to access sockapi
2018/09/13 21:46:33 [notice] 14760#20868: Access to sockapi succeded!
2018/09/13 21:46:33 [notice] 14760#20868: using sockapi from "4;"
2018/09/13 21:46:34 [notice] 18736#7628: sockinit() attempting to access sockapi
2018/09/13 21:46:34 [notice] 18736#7628: Access to sockapi succeded!
2018/09/13 21:46:34 [notice] 18736#7628: using sockapi from "4;"
2018/09/13 21:47:47 [notice] 64836#17656: signal process started (stop)

 

恋你朝朝暮暮 2021-12-08 15:06:08

nginx.conf below:


#user  nobody;
# multiple workers works !
worker_processes  8;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

#pcre_jit on;

events {
    worker_connections  20480;
    # max value 32768, nginx recycling connections+registry optimization = 
    #   this.value * 20 = max concurrent connections currently tested with one worker
    #   C1000K should be possible depending there is enough ram/cpu power
    # multi_accept on;
}


http {
    #include      /nginx/conf/naxsi_core.rules;
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr $remote_port - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

## loadbalancing PHP
#     upstream myLoadBalancer {
#         server 127.0.0.1:9001 weight=1 fail_timeout=5;
#         server 127.0.0.1:9002 weight=1 fail_timeout=5;
#         server 127.0.0.1:9003 weight=1 fail_timeout=5;
#         server 127.0.0.1:9004 weight=1 fail_timeout=5;
#         server 127.0.0.1:9005 weight=1 fail_timeout=5;
#         server 127.0.0.1:9006 weight=1 fail_timeout=5;
#         server 127.0.0.1:9007 weight=1 fail_timeout=5;
#         server 127.0.0.1:9008 weight=1 fail_timeout=5;
#         server 127.0.0.1:9009 weight=1 fail_timeout=5;
#         server 127.0.0.1:9010 weight=1 fail_timeout=5;
#         least_conn;
#     }
    # upstream defines a pool of backend servers, here named "naire"; client requests are distributed across it.
    upstream naire {
        # weight: polling probability, proportional to the share of traffic each backend gets;
        #   useful when backends have unequal capacity (e.g. weight=2 gets twice the traffic of weight=1).
        # fail_timeout: the window in which max_fails is counted, and how long a failed host stays marked down.
        # max_fails: number of failed connections within fail_timeout before the host is considered
        #   unavailable (here 2 failures within 30s; the default is 1, and 0 disables the check).
        #server 127.0.0.1:8080 weight=1 max_fails=2 fail_timeout=30s;
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
        server 127.0.0.1:8079;
        #ip_hash;  # hash each request by client IP so a visitor always hits the same backend; sidesteps session sharing.

        keepalive 10240;
    }
    sendfile        off;
    #tcp_nopush     on;

    server_names_hash_bucket_size 128;
    map_hash_bucket_size 64;

## Start: Timeouts ##
    client_body_timeout   10;
    client_header_timeout 10;
    keepalive_timeout     30;
    send_timeout          10;
    keepalive_requests    10;
## End: Timeouts ##

    #gzip  on;

    server {
        listen       81;          # virtual-host port; requests here are forwarded to the ports defined in the upstream
        server_name  127.0.0.1;   # listen address: the site is accessed via 127.0.0.1
        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        ## Caching Static Files, put before first location
        #location ~* .(jpg|jpeg|png|gif|ico|css|js)$ {
        #    expires 14d;
        #    add_header Vary Accept-Encoding;
        #}

#For Naxsi remove the single# line for learn mode, or the ## lines for full WAF mode
        # location block: routes requests and decides how each page is handled
        location / {
           proxy_pass http://naire;  # forward the request to the server list defined in the "naire" upstream
        }

#For Naxsi remove the## lines for full WAF mode, redirect location block used by naxsi
        ##location /RequestDenied {
        ##    return 412;
        ##}

## Lua examples !
#         location /robots.txt {
#           rewrite_by_lua '
#             if ngx.var.http_host ~= "localhost" then
#               return ngx.exec("/robots_disallow.txt");
#             end
#           ';
#         }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ .php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ .php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000; # single backend process
        #fastcgi_pass   myLoadBalancer;# or multiple, see example above
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl http2;
    #    server_name  localhost;

    #    ssl                  on;
    #    ssl_certificate      c:/nginx/crts/cert.pem;
    #    ssl_certificate_key  c:/nginx/crts/cert.key;

    #    ssl_session_timeout  5m;

    #    ssl_prefer_server_ciphers On;
    #    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    #    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:ECDH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!eNULL:!MD5:!DSS:!EXP:!ADH:!LOW:!MEDIUM;

    #    Logjam (not really required to fix it, above cipher works too)
    #    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:ECDH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!eNULL:!MD5:!DSS:!EXP:!ADH:!LOW:!MEDIUM:!DES:!RC4:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}

 

辞别 2021-12-08 15:05:05

worker_processes is 8, the CPU count is also 8, and worker_connections is 20480.
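A rough upper bound implied by those numbers (a back-of-the-envelope sketch only; in practice the select()-based Windows build caps out far lower):

```shell
# Theoretical connection budget for worker_processes=8, worker_connections=20480.
# A reverse-proxied request holds two connections at once: one from the client
# and one to the upstream, so client-facing capacity is roughly half the budget.
worker_processes=8
worker_connections=20480
echo $(( worker_processes * worker_connections / 2 ))   # prints 81920
```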

噩梦成真你也成魔 2021-12-08 14:12:22

I've run into this before. How many workers did you start?

归途 2021-12-08 12:36:07

Posting it inline would be messy; I'll add it in the comments below.

臻嫒无言 2021-12-08 12:24:34

First time I've seen a question asked as a zip attachment.

天涯离梦残月幽梦 2021-12-08 12:08:39

Quoting "foy":

It's a limitation of the Windows event model; nginx is just this slow on Windows, and there's no real fix.
