502 and 504 are two of the most common error codes returned by Nginx, but you may not fully understand what each of them means. In this post we will read the source code and capture packets to analyze these two error codes in detail.

502 and 504 usually appear when Nginx, acting as a proxy, fails to get a response from an upstream server. To make the analysis below easier to follow, it is worth reviewing the flow of the Nginx upstream module before we formally begin; for details see the earlier post "Nginx Upstream流程分析".

[Figure: Nginx upstream module flow]

With this framework in mind, let's get into the main topic.

Source code analysis

Let's start with ngx_http_upstream_connect, the function responsible for connecting to the upstream server.

static void
ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u)
{
    ngx_int_t                  rc;
    ngx_connection_t          *c;
    ngx_http_core_loc_conf_t  *clcf;

    r->connection->log->action = "connecting to upstream";

    /*
    ... omitted
    */

    // connect to the upstream server
    rc = ngx_event_connect_peer(&u->peer);

    ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "http upstream connect: %i", rc);

    if (rc == NGX_ERROR) {
        ngx_http_upstream_finalize_request(r, u,
                                           NGX_HTTP_INTERNAL_SERVER_ERROR);
        return;
    }

    u->state->peer = u->peer.name;

    if (rc == NGX_BUSY) {
        ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "no live upstreams");
        ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_NOLIVE);
        return;
    }


    // the connect() attempt failed right away (e.g. connection refused)
    if (rc == NGX_DECLINED) {
        // try the next upstream server
        ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
        return;
    }

    /* rc == NGX_OK || rc == NGX_AGAIN || rc == NGX_DONE */
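    /* for NGX_AGAIN (connection still in progress) the omitted code arms a
       write-event timer with ngx_add_timer using the proxy_connect_timeout
       value and returns; that timer is what later produces the
       connect-timeout case */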

    /*
        ... omitted
    */

    ngx_http_upstream_send_request(r, u, 1);
}

The source shows that when the connection attempt to the upstream server fails, Nginx retries against the next upstream server, carrying the failure type NGX_HTTP_UPSTREAM_FT_ERROR.

Next, let's look at ngx_http_upstream_next and analyze the retry logic.

static void
ngx_http_upstream_next(ngx_http_request_t *r, ngx_http_upstream_t *u,
    ngx_uint_t ft_type)
{
    ngx_msec_t  timeout;
    ngx_uint_t  status, state;

    ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "http next upstream, %xi", ft_type);

    if (u->peer.sockaddr) {

        if (u->peer.connection) {
            u->state->bytes_sent = u->peer.connection->sent;
        }

        if (ft_type == NGX_HTTP_UPSTREAM_FT_HTTP_403
            || ft_type == NGX_HTTP_UPSTREAM_FT_HTTP_404)
        {
            state = NGX_PEER_NEXT;

        } else {
            state = NGX_PEER_FAILED;
        }
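        /* NGX_PEER_FAILED makes the load balancer count this attempt as a
           failure (the max_fails / fail_timeout accounting) */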

        u->peer.free(&u->peer, u->peer.data, state);
        u->peer.sockaddr = NULL;
    }

    if (ft_type == NGX_HTTP_UPSTREAM_FT_TIMEOUT) {
        ngx_log_error(NGX_LOG_ERR, r->connection->log, NGX_ETIMEDOUT,
                      "upstream timed out");
    }

    if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) {
        /* TODO: inform balancer instead */
        u->peer.tries++;
    }

    switch (ft_type) {

    case NGX_HTTP_UPSTREAM_FT_TIMEOUT:
    case NGX_HTTP_UPSTREAM_FT_HTTP_504:
        status = NGX_HTTP_GATEWAY_TIME_OUT;
        break;

    case NGX_HTTP_UPSTREAM_FT_HTTP_500:
        status = NGX_HTTP_INTERNAL_SERVER_ERROR;
        break;

    case NGX_HTTP_UPSTREAM_FT_HTTP_503:
        status = NGX_HTTP_SERVICE_UNAVAILABLE;
        break;

    case NGX_HTTP_UPSTREAM_FT_HTTP_403:
        status = NGX_HTTP_FORBIDDEN;
        break;

    case NGX_HTTP_UPSTREAM_FT_HTTP_404:
        status = NGX_HTTP_NOT_FOUND;
        break;

    case NGX_HTTP_UPSTREAM_FT_HTTP_429:
        status = NGX_HTTP_TOO_MANY_REQUESTS;
        break;

    /*
     * NGX_HTTP_UPSTREAM_FT_BUSY_LOCK and NGX_HTTP_UPSTREAM_FT_MAX_WAITING
     * never reach here
     */
    // anything else defaults to 502
    default:
        status = NGX_HTTP_BAD_GATEWAY;
    }

    if (r->connection->error) {
        ngx_http_upstream_finalize_request(r, u,
                                           NGX_HTTP_CLIENT_CLOSED_REQUEST);
        return;
    }

    u->state->status = status;

    timeout = u->conf->next_upstream_timeout;

    if (u->request_sent
        && (r->method & (NGX_HTTP_POST|NGX_HTTP_LOCK|NGX_HTTP_PATCH)))
    {
        ft_type |= NGX_HTTP_UPSTREAM_FT_NON_IDEMPOTENT;
    }

    // no peers left to try or the retry limit was reached: finish the request
    // and return the corresponding status code
    if (u->peer.tries == 0 // no peers left to try / retry limit reached
        || ((u->conf->next_upstream & ft_type) != ft_type) // the failure type is not listed in proxy_next_upstream (e.g. "proxy_next_upstream error timeout;")
        || (u->request_sent && r->request_body_no_buffering) // the request was already sent upstream with "proxy_request_buffering off;"
        || (timeout && ngx_current_msec - u->peer.start_time >= timeout)) // the proxy_next_upstream_timeout retry window has expired
    {

        ngx_http_upstream_finalize_request(r, u, status);
        return;
    }

    // ... omitted

    ngx_http_upstream_connect(r, u);
}

From this the picture is already clear:

When an error forces the request to be finalized, Nginx returns 502 by default, unless the failure type is one of the explicitly handled cases.

NGX_HTTP_UPSTREAM_FT_ERROR from the failed connection attempt in the previous step is not one of those explicitly handled types, so it naturally results in a 502.

At the same time, these few lines show that 504 is one of the explicitly handled failure types.

    case NGX_HTTP_UPSTREAM_FT_TIMEOUT:
    case NGX_HTTP_UPSTREAM_FT_HTTP_504:
        status = NGX_HTTP_GATEWAY_TIME_OUT;
        break;

To find out when 504 is returned, we need to work out when the failure type is NGX_HTTP_UPSTREAM_FT_TIMEOUT or NGX_HTTP_UPSTREAM_FT_HTTP_504. Searching the code turns up two main places:

  • ngx_http_upstream_send_request_handler (sending the request)
  • ngx_http_upstream_process_header (reading the response header)

They are exactly the part of the flow in the figure above between peer.get and ngx_http_upstream_next; even without searching the code backwards, a forward walk through the flow would bring us to the same places.

ngx_http_upstream_send_request_handler

static void
ngx_http_upstream_send_request_handler(ngx_http_request_t *r,
    ngx_http_upstream_t *u)
{
    ngx_connection_t  *c;

    c = u->peer.connection;

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "http upstream send request handler");

    if (c->write->timedout) {
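        /* the write-event timer has expired; it is armed with the
           proxy_connect_timeout value while connecting and with the
           proxy_send_timeout value while sending the request */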
        ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_TIMEOUT);
        return;
    }

    // ... omitted

    ngx_http_upstream_send_request(r, u, 1);
}

ngx_http_upstream_process_header

static void
ngx_http_upstream_process_header(ngx_http_request_t *r, ngx_http_upstream_t *u)
{
    ssize_t            n;
    ngx_int_t          rc;
    ngx_connection_t  *c;

    c = u->peer.connection;

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
                   "http upstream process header");

    c->log->action = "reading response header from upstream";

    if (c->read->timedout) {
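        /* the read-event timer has expired; it is armed with the
           proxy_read_timeout value once the request has been fully sent */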
        ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_TIMEOUT);
        return;
    }

    // ... omitted

    ngx_http_upstream_send_response(r, u);
}

A short summary so far:

  • Nginx returns 504 when sending the request to the upstream or reading the response header times out;
  • Nginx returns 502 when it fails to establish a connection to the upstream;

Note that this conclusion only covers the logic analyzed above, which is clearly not the whole story; there is no room to list all of the code here. In practice, other errors while sending the request or reading the response header (for example the upstream sending an RST) can also finalize the request with a 502. The analysis above is simply meant to map out the overall framework quickly; a small sketch for the RST case follows below.
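To make the RST case easy to reproduce, here is a minimal Go sketch of a misbehaving upstream; it is an illustration added here, not part of the original experiment, and port 3005 is a hypothetical choice. The server accepts TCP connections and closes them with SO_LINGER set to 0, which makes the kernel send an RST instead of a normal FIN; pointing proxy_pass at it should yield a 502.

package main

import (
    "log"
    "net"
)

func main() {
    // hypothetical port; adjust it to match your upstream{} block
    ln, err := net.Listen("tcp", "127.0.0.1:3005")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        // SetLinger(0) discards unsent data and makes Close() send an RST
        conn.(*net.TCPConn).SetLinger(0)
        conn.Close()
    }
}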

Next, let's look at the Nginx directives related to reverse-proxy timeouts.

Reverse proxy timeout directives

proxy_connect_timeout      60;
proxy_read_timeout         60;
proxy_send_timeout         60;
proxy_next_upstream_timeout             0;

As you can see, proxy_read_timeout and proxy_send_timeout correspond to the two timeout cases analyzed above.

proxy_next_upstream_timeout is the retry time limit used in ngx_http_upstream_next.

proxy_connect_timeout is the timeout for connecting to the upstream; it is implemented by registering a timer with ngx_add_timer. We will not walk through that code here, but we will test this case later. To keep the different timeouts apart, a small illustrative sketch follows.
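As a conceptual aside (plain Go, not Nginx code, added here purely for illustration), the sketch below shows the two kinds of client-side deadlines these directives bound: a deadline on establishing the TCP connection, which is what proxy_connect_timeout limits, and a deadline on reading the response, which is what proxy_read_timeout limits. The address 127.0.0.1:3000 and the 3-second values mirror the test setup used later.

package main

import (
    "bufio"
    "fmt"
    "net"
    "time"
)

func main() {
    // connect phase: bounded by a dial timeout (the analogue of proxy_connect_timeout)
    conn, err := net.DialTimeout("tcp", "127.0.0.1:3000", 3*time.Second)
    if err != nil {
        fmt.Println("connect failed or timed out:", err)
        return
    }
    defer conn.Close()

    // read phase: bounded by a read deadline (the analogue of proxy_read_timeout)
    fmt.Fprintf(conn, "HEAD / HTTP/1.0\r\nHost: localhost\r\n\r\n")
    conn.SetReadDeadline(time.Now().Add(3 * time.Second))
    line, err := bufio.NewReader(conn).ReadString('\n')
    if err != nil {
        fmt.Println("read timed out:", err) // the case Nginx maps to 504
        return
    }
    fmt.Println("status line:", line)
}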

Experimental verification

Simulate a slow upstream server:

package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        time.Sleep(30 * time.Second) // delay the response
        fmt.Fprintf(w, "ok!")
    })

    http.ListenAndServe(":3000", nil)
}

Nginx configuration

    upstream my_upstream {
        server 127.0.0.1:3000; # the mock slow server above
        server 127.0.0.1:3001; # nothing listening on this port
        server 127.0.0.1:3002; # nothing listening on this port
        server 127.0.0.1:3003; # nothing listening on this port
    }
    server {
        listen 8080;
        access_log  logs/dev.log main;

        rewrite_by_lua_block {;}
        access_by_lua_block {;}
        log_by_lua_block {;}

        location /upstream {
            proxy_connect_timeout      3;
            proxy_read_timeout         3;
            proxy_send_timeout         3;
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_timeout             0; # no time limit on retries
            proxy_next_upstream_tries               0; # no limit on the number of retries
            proxy_pass                              http://my_upstream;
        }
    }
}

Simulate the client with curl

➜  ~ curl localhost:8080/upstream --head
HTTP/1.1 504 Gateway Time-out
Server: openresty
Date: Wed, 15 Nov 2023 07:22:25 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 164
Connection: keep-alive

Now the Nginx error log:

2023/11/15 15:22:22 [error] 10192#4365279: *2361 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3003/upstream", host: "localhost:8080"
2023/11/15 15:22:22 [warn] 10192#4365279: *2361 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3003/upstream", host: "localhost:8080"
2023/11/15 15:22:22 [error] 10192#4365279: *2361 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3002/upstream", host: "localhost:8080"
2023/11/15 15:22:22 [warn] 10192#4365279: *2361 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3002/upstream", host: "localhost:8080"
2023/11/15 15:22:22 [error] 10192#4365279: *2361 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3001/upstream", host: "localhost:8080"
2023/11/15 15:22:22 [warn] 10192#4365279: *2361 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3001/upstream", host: "localhost:8080"
2023/11/15 15:22:25 [warn] 10192#4365279: *2361 upstream server temporarily disabled while reading response header from upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3000/upstream", host: "localhost:8080"
2023/11/15 15:22:25 [error] 10192#4365279: *2361 upstream timed out (60: Operation timed out) while reading response header from upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3000/upstream", host: "localhost:8080"
2023/11/15 15:22:25 [info] 10192#4365279: *2361 kevent() reported that client 127.0.0.1 closed keepalive connection
2023/11/15 15:13:41 [error] 10192#4365279: *735 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3003/upstream", host: "localhost:8080"
2023/11/15 15:13:41 [warn] 10192#4365279: *735 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3003/upstream", host: "localhost:8080"
2023/11/15 15:13:41 [error] 10192#4365279: *735 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3002/upstream", host: "localhost:8080"
2023/11/15 15:13:41 [warn] 10192#4365279: *735 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3002/upstream", host: "localhost:8080"
2023/11/15 15:13:41 [error] 10192#4365279: *735 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3001/upstream", host: "localhost:8080"
2023/11/15 15:13:41 [warn] 10192#4365279: *735 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3001/upstream", host: "localhost:8080"
2023/11/15 15:13:44 [warn] 10192#4365279: *735 upstream server temporarily disabled while reading response header from upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3000/upstream", host: "localhost:8080"
2023/11/15 15:13:44 [error] 10192#4365279: *735 upstream timed out (60: Operation timed out) while reading response header from upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3000/upstream", host: "localhost:8080"
2023/11/15 15:13:44 [info] 10192#4365279: *735 kevent() reported that client 127.0.0.1 closed keepalive connection

Let's look at a packet capture.

[Figure: packet capture of the 504 case]

A few things can be read from the capture:

  • Nginx picks upstream peers with its default weighted round-robin (wrr) load-balancing algorithm, not simply in the order they are listed;

  • Because proxy_next_upstream_timeout and proxy_next_upstream_tries are both set to 0, neither the retry time nor the number of retries is limited, so Nginx tries every upstream peer;

  • Walking through the capture:

    Nothing is listening on ports 3001-3003, so the SYN sent during the handshake is answered immediately with an RST and the connection fails. Since proxy_next_upstream error timeout; tells Nginx to retry on errors and timeouts, it moves on to the next peer until it reaches the server on port 3000. That server responds slowly, so after the 3 seconds configured by proxy_read_timeout 3; the read times out; by then there are no peers left to try, the request must be finalized immediately, and the client receives a 504.

Because the order in which peers are picked varies from request to request, the test above does not always return 504; it can also return 502.

➜  ~ curl localhost:8080/upstream --head
HTTP/1.1 502 Bad Gateway
Server: openresty
Date: Wed, 15 Nov 2023 07:56:37 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 154
Connection: keep-alive

So when does it return 502?

When the last retry lands on one of the ports 3001-3003, the response is 502.

Let's capture packets again to verify.

[Figure: packet capture of the 502 case]

Walking through the process: Nginx first tries port 3003 and the connection fails; it then tries 3000, the connection succeeds but the read times out after 3 s; it tries 3001 and the connection fails; it tries 3002 and the connection fails; now there are no peers left to try, so it returns 502.

Nginx error log:

2023/11/15 15:56:34 [error] 10192#4365279: *8721 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3003/upstream", host: "localhost:8080"
2023/11/15 15:56:34 [warn] 10192#4365279: *8721 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3003/upstream", host: "localhost:8080"
2023/11/15 15:56:37 [warn] 10192#4365279: *8721 upstream server temporarily disabled while reading response header from upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3000/upstream", host: "localhost:8080"
2023/11/15 15:56:37 [error] 10192#4365279: *8721 upstream timed out (60: Operation timed out) while reading response header from upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3000/upstream", host: "localhost:8080"
2023/11/15 15:56:37 [error] 10192#4365279: *8721 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3001/upstream", host: "localhost:8080"
2023/11/15 15:56:37 [warn] 10192#4365279: *8721 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3001/upstream", host: "localhost:8080"
2023/11/15 15:56:37 [error] 10192#4365279: *8721 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3002/upstream", host: "localhost:8080"
2023/11/15 15:56:37 [warn] 10192#4365279: *8721 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://127.0.0.1:3002/upstream", host: "localhost:8080"
2023/11/15 15:56:37 [info] 10192#4365279: *8721 kevent() reported that client 127.0.0.1 closed keepalive connection

The retry sequence is visible here as well.

Connection timeout

When introducing the timeout directives above we mentioned proxy_connect_timeout, which bounds the TCP handshake. Let's run a small test to make this case easier to understand.

Nginx configuration

    upstream my_upstream {
        server 8.130.x.x; # a firewall rule on this server blocks the port, simulating a handshake timeout
    }
    server {
        listen 8080;
        access_log  logs/dev.log main;
        
        rewrite_by_lua_block {;}
        access_by_lua_block {;}
        log_by_lua_block {;}

        location /upstream {
            proxy_connect_timeout      3;
            proxy_read_timeout         3;
            proxy_send_timeout         3;
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_timeout             0; # no time limit on retries
            proxy_next_upstream_tries               0; # no limit on the number of retries
            proxy_pass                              http://my_upstream;
        }
    }
}

Simulate the client request with curl

➜  ~ curl localhost:8080/upstream --head
HTTP/1.1 504 Gateway Time-out
Server: openresty
Date: Wed, 15 Nov 2023 12:47:40 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 164
Connection: keep-alive

The log:

2023/11/15 16:24:55 [error] 11273#4430180: *5 upstream timed out (60: Operation timed out) while connecting to upstream, client: 127.0.0.1, server: xnile, request: "GET /upstream HTTP/1.1", upstream: "http://8.130.46.15:80/upstream", host: "xnile.cn:8080"
2023/11/15 16:24:55 [info] 11273#4430180: *5 kevent() reported that client 127.0.0.1 closed keepalive connection

The error log states the reason clearly: the connection to the upstream timed out.

Packet capture

[Figure: packet capture of the connect-timeout case]

Because the firewall drops the packets, no SYN-ACK ever comes back; after the 3-second timeout expires, a 504 is returned to the client.

What happens if proxy_connect_timeout is set large enough? Will it still return 504?

Modify the Nginx configuration:

proxy_connect_timeout      75; #It should be noted that this timeout cannot usually exceed 75 seconds.

Try curl again:

➜  ~ time curl localhost:8080/upstream --head
HTTP/1.1 502 Bad Gateway
Server: openresty
Date: Wed, 15 Nov 2023 13:33:42 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 154
Connection: keep-alive

curl localhost:8080/upstream --head  0.01s user 0.02s system 0% cpu 1:15.03 total

This time it returns 502, not 504.

Error log

2023/11/15 15:33:42 [error] 11944#4453925: *5284 kevent() reported that connect() failed (60: Operation timed out) while connecting to upstream, client: 127.0.0.1, server: xnile, request: "HEAD /upstream HTTP/1.1", upstream: "http://8.130.46.15:80/upstream", host: "localhost:8080"
2023/11/15 15:33:42 [info] 11944#4453925: *5284 kevent() reported that client 127.0.0.1 closed keepalive connection

Packet capture

[Figure: packet capture with a large proxy_connect_timeout]

Why is that?

Because no SYN-ACK comes back, the operating system keeps retransmitting the SYN until the maximum number of retries is reached, then gives up and reports the connection as timed out to the caller. Since proxy_connect_timeout is set large enough, Nginx's own timer has not expired yet, so the failure surfaces as a connect() error rather than a timeout, and the request ends with 502 instead of 504.

PS: the number of SYN retransmissions and the interval between them differ across platforms, so the total retransmission time may also exceed the configured proxy_connect_timeout; in that case the result is no longer 502 but 504.

PS: the tests above were run on macOS.
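If you want to check how long your own TCP stack keeps retransmitting before it gives up, here is a minimal Go sketch added for illustration; it assumes 203.0.113.1 (a TEST-NET documentation address) silently drops the SYNs, and it simply measures how long Dial blocks before returning an error.

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // 203.0.113.1 is a documentation address assumed to drop SYNs;
    // Dial without a timeout blocks until the kernel gives up retransmitting.
    start := time.Now()
    _, err := net.Dial("tcp", "203.0.113.1:80")
    fmt.Printf("gave up after %v: %v\n", time.Since(start), err)
}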

Summary

  • When the following timeout directives are configured:

    • proxy_connect_timeout
    • proxy_read_timeout
    • proxy_send_timeout
    • proxy_next_upstream_timeout

    Nginx returns 504 once any of these four timeouts expires;

  • 502 is better thought of as the superset of failed upstream requests, with 504 as a subset of it that refers specifically to failures caused by timeouts.

    The analysis above is only a starting point; it is not exhaustive and does not cover every situation that can lead to a 502 or 504. When you run into one in practice, the error log is the best aid for pinpointing the cause.