
haproxy: bump to version 1.5.16

[RELEASE] Released version 1.5.16

  Released version 1.5.16 with the following main changes :
    - BUG/BUILD: replace haproxy-systemd-wrapper with $(EXTRA) in install-bin.
    - BUG/MINOR: acl: don't use record layer in req_ssl_ver
    - BUG: http: do not abort keep-alive connections on server timeout
    - BUG/MEDIUM: http: switch the request channel to no-delay once done.
    - MINOR: config: extend the default max hostname length to 64 and beyond
    - BUG/MEDIUM: http: don't enable auto-close on the response side
    - BUG/MEDIUM: stream: fix half-closed timeout handling
    - BUG/MEDIUM: cli: changing compression rate-limiting must require admin level
    - BUILD: freebsd: double declaration
    - BUG/MEDIUM: sample: urlp can't match an empty value
    - BUG/MEDIUM: peers: table entries learned from a remote are pushed to others after a random delay.
    - BUG/MEDIUM: peers: old stick table updates could be repushed.
    - CLEANUP: haproxy: using _GNU_SOURCE instead of __USE_GNU macro.
    - BUG/MINOR: chunk: make chunk_dup() always check and set dst->size
    - MINOR: chunks: ensure that chunk_strcpy() adds a trailing zero
    - MINOR: chunks: add chunk_strcat() and chunk_newstr()
    - MINOR: chunk: make chunk_initstr() take a const string
    - BUG/MEDIUM: config: Adding validation to stick-table expire value.
    - BUG/MEDIUM: sample: http_date() doesn't provide the right day of the week
    - BUG/MEDIUM: channel: fix miscalculation of available buffer space.
    - BUG/MINOR: stream: don't force retries if the server is DOWN
    - MINOR: unix: don't mention free ports on EAGAIN
    - BUG/CLEANUP: CLI: report the proper field states in "show sess"
    - MINOR: stats: send content-length with the redirect to allow keep-alive
    - BUG: stream_interface: Reuse connection even if the output channel is empty
    - DOC: remove old tunnel mode assumptions
    - DOC: add server name at rate-limit sessions example
    - BUG/MEDIUM: ssl: fix off-by-one in ALPN list allocation
    - BUG/MEDIUM: ssl: fix off-by-one in NPN list allocation
    - BUG/MEDIUM: stats: stats bind-process doesn't propagate the process mask correctly
    - BUG/MINOR: http: Be sure to process all the data received from a server
    - BUG/MEDIUM: chunks: always reject negative-length chunks
    - BUG/MINOR: systemd: ensure we don't miss signals
    - BUG/MINOR: systemd: report the correct signal in debug message output
    - BUG/MINOR: systemd: propagate the correct signal to haproxy
    - MINOR: systemd: ensure a reload doesn't mask a stop
    - CLEANUP: stats: Avoid computation with uninitialized bits.
    - CLEANUP: pattern: Ignore unknown samples in pat_match_ip().
    - CLEANUP: map: Avoid memory leak in out-of-memory condition.
    - BUG/MINOR: tcpcheck: conf parsing error when no port configured on server and last rule is a CONNECT with no port
    - BUG/MINOR: tcpcheck: fix incorrect list usage resulting in failure to load certain configs
    - MINOR: cfgparse: warn when uid parameter is not a number
    - MINOR: cfgparse: warn when gid parameter is not a number
    - BUG/MINOR: standard: Avoid free of non-allocated pointer
    - BUG/MINOR: pattern: Avoid memory leak on out-of-memory condition
    - CLEANUP: http: fix a build warning introduced by a recent fix
    - BUG/MINOR: log: GMT offset not updated when entering/leaving DST

Signed-off-by: heil <heil@terminal-consulting.de>
Branch: lilik-openwrt-22.03
Author: heil
Commit: 9c394b4c1b
14 changed files with 3 additions and 666 deletions
  1. net/haproxy/Makefile (+3, -3)
  2. net/haproxy/patches/0001-BUG-BUILD-replace-haproxy-systemd-wrapper-with-EXTRA.patch (+0, -34)
  3. net/haproxy/patches/0002-BUG-MINOR-acl-don-t-use-record-layer-in-req_ssl_ver.patch (+0, -69)
  4. net/haproxy/patches/0003-BUG-http-do-not-abort-keep-alive-connections-on-serv.patch (+0, -37)
  5. net/haproxy/patches/0004-BUG-MEDIUM-http-switch-the-request-channel-to-no-del.patch (+0, -112)
  6. net/haproxy/patches/0005-MINOR-config-extend-the-default-max-hostname-length-.patch (+0, -52)
  7. net/haproxy/patches/0006-BUG-MEDIUM-http-don-t-enable-auto-close-on-the-respo.patch (+0, -49)
  8. net/haproxy/patches/0007-BUG-MEDIUM-stream-fix-half-closed-timeout-handling.patch (+0, -88)
  9. net/haproxy/patches/0008-BUG-MEDIUM-cli-changing-compression-rate-limiting-mu.patch (+0, -36)
  10. net/haproxy/patches/0009-BUILD-freebsd-double-declaration.patch (+0, -31)
  11. net/haproxy/patches/0010-BUG-MEDIUM-sample-urlp-can-t-match-an-empty-value.patch (+0, -53)
  12. net/haproxy/patches/0011-BUG-MEDIUM-peers-table-entries-learned-from-a-remote.patch (+0, -31)
  13. net/haproxy/patches/0012-BUG-MEDIUM-peers-old-stick-table-updates-could-be-re.patch (+0, -28)
  14. net/haproxy/patches/0013-CLEANUP-haproxy-using-_GNU_SOURCE-instead-of-__USE_G.patch (+0, -43)

net/haproxy/Makefile (+3, -3)

@@ -9,12 +9,12 @@
 include $(TOPDIR)/rules.mk
 
 PKG_NAME:=haproxy
-PKG_VERSION:=1.5.15
-PKG_RELEASE:=13
+PKG_VERSION:=1.5.16
+PKG_RELEASE:=01
 
 PKG_SOURCE:=haproxy-$(PKG_VERSION).tar.gz
 PKG_SOURCE_URL:=http://haproxy.1wt.eu/download/1.5/src/
 PKG_BUILD_DIR:=$(BUILD_DIR)/$(PKG_NAME)-$(BUILD_VARIANT)/$(PKG_NAME)-$(PKG_VERSION)
-PKG_MD5SUM:=eeaa35744f84c92184cd735ee56dd0a3
+PKG_MD5SUM:=294fdb5aaaccba00c2070e5f4baf9f0e
 PKG_MAINTAINER:=Thomas Heil <heil@terminal-consulting.de>
 PKG_LICENSE:=GPL-2.0


net/haproxy/patches/0001-BUG-BUILD-replace-haproxy-systemd-wrapper-with-EXTRA.patch (+0, -34)

@@ -1,34 +0,0 @@
From 4818bc3035bccc00d8c3fc9b14ec37366cac3059 Mon Sep 17 00:00:00 2001
From: Jerome Duval <jerome.duval@gmail.com>
Date: Mon, 2 Nov 2015 17:47:43 +0000
Subject: [PATCH 01/10] BUG/BUILD: replace haproxy-systemd-wrapper with
$(EXTRA) in install-bin.
[wt: this should be backported to 1.6 and 1.5 as well since some platforms
don't build the systemd-wrapper]
(cherry picked from commit 796d2fc136359c31c5c35f00c0751890ab42a016)
(cherry picked from commit 9d0b47d96825b0584ea81c826a96ed8babcc016b)
---
Makefile | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/Makefile b/Makefile
index 9556069..e3199b2 100644
--- a/Makefile
+++ b/Makefile
@@ -719,10 +719,9 @@ install-doc:
install -m 644 doc/$$x.txt "$(DESTDIR)$(DOCDIR)" ; \
done
-install-bin: haproxy haproxy-systemd-wrapper
+install-bin: haproxy $(EXTRA)
install -d "$(DESTDIR)$(SBINDIR)"
- install haproxy "$(DESTDIR)$(SBINDIR)"
- install haproxy-systemd-wrapper "$(DESTDIR)$(SBINDIR)"
+ install haproxy $(EXTRA) "$(DESTDIR)$(SBINDIR)"
install: install-bin install-man install-doc
--
2.4.10

net/haproxy/patches/0002-BUG-MINOR-acl-don-t-use-record-layer-in-req_ssl_ver.patch (+0, -69)

@@ -1,69 +0,0 @@
From 1af6a324c3206902f69bd2c9838e94ffb4cee3ae Mon Sep 17 00:00:00 2001
From: Lukas Tribus <luky-37@hotmail.com>
Date: Thu, 5 Nov 2015 13:59:30 +0100
Subject: [PATCH 02/10] BUG/MINOR: acl: don't use record layer in req_ssl_ver
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
The initial record layer version in an SSL handshake may be set to TLSv1.0
or similar for compatibility reasons; this is allowed as per RFC5246
Appendix E.1 [1]. OpenSSL [2] and NSS [3] are examples of implementations that do this.
A related issue has been fixed some time ago in commit 57d229747
("BUG/MINOR: acl: req_ssl_sni fails with SSLv3 record version").
Fix this by using the real client hello version instead of the record
layer version.
This was reported by Julien Vehent and analyzed by Cyril Bonté.
The initial patch is from Julien Vehent as well.
This should be backported to stable series, the req_ssl_ver keyword was
first introduced in 1.3.16.
[1] https://tools.ietf.org/html/rfc5246#appendix-E.1
[2] https://github.com/openssl/openssl/commit/4a1cf50187659e60c5867ecbbc36e37b2605d2c3
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=774547
(cherry picked from commit c93242cab986087f06a4655d14fec18eecb7f5f4)
(cherry picked from commit b048a6eb3d9cb518e4a378e20ba2a801afec553c)
---
src/payload.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/src/payload.c b/src/payload.c
index f62163c..b8f1ca3 100644
--- a/src/payload.c
+++ b/src/payload.c
@@ -148,21 +148,24 @@ smp_fetch_req_ssl_ver(struct proxy *px, struct session *s, void *l7, unsigned in
data = (const unsigned char *)s->req->buf->p;
if ((*data >= 0x14 && *data <= 0x17) || (*data == 0xFF)) {
/* SSLv3 header format */
- if (bleft < 5)
+ if (bleft < 11)
goto too_short;
- version = (data[1] << 16) + data[2]; /* version: major, minor */
+ version = (data[1] << 16) + data[2]; /* record layer version: major, minor */
msg_len = (data[3] << 8) + data[4]; /* record length */
/* format introduced with SSLv3 */
if (version < 0x00030000)
goto not_ssl;
- /* message length between 1 and 2^14 + 2048 */
- if (msg_len < 1 || msg_len > ((1<<14) + 2048))
+ /* message length between 6 and 2^14 + 2048 */
+ if (msg_len < 6 || msg_len > ((1<<14) + 2048))
goto not_ssl;
bleft -= 5; data += 5;
+
+ /* return the client hello client version, not the record layer version */
+ version = (data[4] << 16) + data[5]; /* client hello version: major, minor */
} else {
/* SSLv2 header format, only supported for hello (msg type 1) */
int rlen, plen, cilen, silen, chlen;
--
2.4.10
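
For illustration only, here is a minimal standalone sketch (plain C, not HAProxy
code) of the idea behind this fix: the record layer version at bytes 1-2 may be a
compatibility value, while the ClientHello carries the real client version at
bytes 9-10, right after the 5-byte record header and the 4-byte handshake header.
Offsets follow RFC 5246; the function name is made up for the example.

    #include <stddef.h>
    #include <stdint.h>

    /* Return the ClientHello client_version as (major << 16) | minor, the same
     * convention the patch above uses, or 0 if the buffer is too short or does
     * not look like an SSLv3/TLS handshake record.
     */
    static unsigned int client_hello_version(const uint8_t *data, size_t len)
    {
        if (len < 11)
            return 0;
        if (data[0] != 0x16)      /* record type must be "handshake" */
            return 0;
        if (data[1] < 0x03)       /* record layer major version below SSLv3 */
            return 0;
        if (data[5] != 0x01)      /* handshake type must be ClientHello */
            return 0;
        /* bytes 1-2: record layer version (possibly just a compat value),
         * bytes 9-10: the client_version announced inside the ClientHello.
         */
        return ((unsigned int)data[9] << 16) | data[10];
    }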

net/haproxy/patches/0003-BUG-http-do-not-abort-keep-alive-connections-on-serv.patch (+0, -37)

@@ -1,37 +0,0 @@
From ef8a113d59e89b2214adf7ab9f9b0b75905a7050 Mon Sep 17 00:00:00 2001
From: lsenta <laurent.senta@gmail.com>
Date: Fri, 13 Nov 2015 10:44:22 +0100
Subject: [PATCH 03/10] BUG: http: do not abort keep-alive connections on
server timeout
When a server timeout is detected on the second or nth request of a keep-alive
connection, HAProxy closes the connection without writing a response.
Some clients would fail with a remote disconnected exception and some
others would retry potentially unsafe requests.
This patch removes the special case and makes sure a 504 timeout is
written back whenever a server timeout is handled.
Signed-off-by: lsenta <laurent.senta@gmail.com>
(cherry picked from commit 1e1f41d0f3473d86da84dc3785b7d7cbef6e9044)
(cherry picked from commit 1f279c0b116f7fbc208793fffbd256c3c736fc52)
---
src/proto_http.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/src/proto_http.c b/src/proto_http.c
index 17742c6..e7e1785 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -5782,8 +5782,6 @@ int http_wait_for_response(struct session *s, struct channel *rep, int an_bit)
else if (rep->flags & CF_READ_TIMEOUT) {
if (msg->err_pos >= 0)
http_capture_bad_message(&s->be->invalid_rep, s, msg, msg->msg_state, s->fe);
- else if (txn->flags & TX_NOT_FIRST)
- goto abort_keep_alive;
s->be->be_counters.failed_resp++;
if (objt_server(s->target)) {
--
2.4.10

net/haproxy/patches/0004-BUG-MEDIUM-http-switch-the-request-channel-to-no-del.patch (+0, -112)

@@ -1,112 +0,0 @@
From c0d56134320e507c82952f3d2a03f76b701945cb Mon Sep 17 00:00:00 2001
From: Willy Tarreau <w@1wt.eu>
Date: Wed, 18 Nov 2015 11:59:55 +0100
Subject: [PATCH 04/10] BUG/MEDIUM: http: switch the request channel to
no-delay once done.
There's an issue when sending POST data that came in a second packet:
the CF_NEVER_WAIT flag is not always set on the request channel while
the server is waiting for the request. We must always set this flag in
this case since we're not going to shut down after sending, contrary
to the response side.
Note that option http-no-delay works around this issue.
Reproducer :
listen px
mode http
timeout client 10s
timeout server 5s
timeout connect 3s
option http-server-close
#option http-no-delay
bind :8001
server s1 127.0.0.1:8003
$ (printf "POST / HTTP/1.1\r\nTransfer-encoding: chunked\r\n\r\n"; sleep 0.01; printf "10\r\nAZERTYUIOPQSDFGH\r\n0\r\n\r\n") | nc6 0 8001
Before this fix :
12:03:31.946763 epoll_wait(3, {{EPOLLIN, {u32=5, u64=5}}}, 200, 1000) = 1
12:03:32.634175 accept4(5, {sa_family=AF_INET, sin_port=htons(53849), sin_addr=inet_addr("127.0.0.1")}, [16], SOCK_NONBLOCK) = 6
12:03:32.634318 setsockopt(6, SOL_TCP, TCP_NODELAY, [1], 4) = 0
12:03:32.634434 accept4(5, 0x7ffccfbb2cf0, [128], SOCK_NONBLOCK) = -1 EAGAIN (Resource temporarily unavailable)
12:03:32.634574 recvfrom(6, "POST / HTTP/1.1\r\nTransfer-encodi"..., 8192, 0, NULL, NULL) = 47
12:03:32.634809 setsockopt(6, SOL_TCP, TCP_QUICKACK, [1], 4) = 0
12:03:32.634952 socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 7
12:03:32.635031 fcntl(7, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
12:03:32.635089 setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
12:03:32.635153 connect(7, {sa_family=AF_INET, sin_port=htons(8003), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
12:03:32.635315 epoll_wait(3, {}, 200, 0) = 0
12:03:32.635394 sendto(7, "POST / HTTP/1.1\r\nTransfer-encodi"..., 66, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 66
12:03:32.635527 recvfrom(6, 0x7f0224e66024, 8192, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)
12:03:32.635651 epoll_ctl(3, EPOLL_CTL_ADD, 6, {EPOLLIN|0x2000, {u32=6, u64=6}}) = 0
12:03:32.635782 epoll_wait(3, {}, 200, 0) = 0
12:03:32.635842 recvfrom(7, 0x7f0224e66024, 8192, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)
12:03:32.635924 epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN|0x2000, {u32=7, u64=7}}) = 0
12:03:32.636027 epoll_wait(3, {{EPOLLIN, {u32=6, u64=6}}}, 200, 1000) = 1
12:03:32.644892 recvfrom(6, "10\r\nAZERTYUIOPQSDFGH\r\n0\r\n\r\n", 8192, 0, NULL, NULL) = 27
12:03:32.645016 epoll_wait(3, {}, 200, 0) = 0
12:03:32.645105 sendto(7, "10\r\nAZERTYUIOPQSDFGH\r\n0\r\n\r\n", 27, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_MORE, NULL, 0) = 27
After the fix :
11:59:12.538617 connect(7, {sa_family=AF_INET, sin_port=htons(8003), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
11:59:12.538787 epoll_wait(3, {}, 200, 0) = 0
11:59:12.538867 sendto(7, "POST / HTTP/1.1\r\nTransfer-encodi"..., 66, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 66
11:59:12.539031 recvfrom(6, 0x7f832ce45024, 8192, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)
11:59:12.539161 epoll_ctl(3, EPOLL_CTL_ADD, 6, {EPOLLIN|0x2000, {u32=6, u64=6}}) = 0
11:59:12.539259 epoll_wait(3, {}, 200, 0) = 0
11:59:12.539337 recvfrom(7, 0x7f832ce45024, 8192, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)
11:59:12.539421 epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN|0x2000, {u32=7, u64=7}}) = 0
11:59:12.539499 epoll_wait(3, {{EPOLLIN, {u32=6, u64=6}}}, 200, 1000) = 1
11:59:12.548519 recvfrom(6, "10\r\nAZERTYUIOPQSDFGH\r\n0\r\n\r\n", 8192, 0, NULL, NULL) = 27
11:59:12.548844 epoll_wait(3, {}, 200, 0) = 0
11:59:12.549012 sendto(7, "10\r\nAZERTYUIOPQSDFGH\r\n0\r\n\r\n", 27, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 27
11:59:12.549454 epoll_wait(3, {}, 200, 1000) = 0
This fix must be backported to 1.6, 1.5 and 1.4.
(cherry picked from commit 7f876a1eeb14ffae708327aad8a0b4b029da5e26)
(cherry picked from commit 712a5339f384db62796aa4d4901e091dd7fd24dd)
---
src/proto_http.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/src/proto_http.c b/src/proto_http.c
index e7e1785..b32e778 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -5001,6 +5001,13 @@ int http_sync_req_state(struct session *s)
*/
chn->cons->flags |= SI_FL_NOHALF;
+ /* In any case we've finished parsing the request so we must
+ * disable Nagle when sending data because 1) we're not going
+ * to shut this side, and 2) the server is waiting for us to
+ * send pending data.
+ */
+ chn->flags |= CF_NEVER_WAIT;
+
if (txn->rsp.msg_state == HTTP_MSG_ERROR)
goto wait_other_side;
@@ -5015,7 +5022,6 @@ int http_sync_req_state(struct session *s)
/* if any side switches to tunnel mode, the other one does too */
channel_auto_read(chn);
txn->req.msg_state = HTTP_MSG_TUNNEL;
- chn->flags |= CF_NEVER_WAIT;
goto wait_other_side;
}
@@ -5048,7 +5054,6 @@ int http_sync_req_state(struct session *s)
if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_TUN) {
channel_auto_read(chn);
txn->req.msg_state = HTTP_MSG_TUNNEL;
- chn->flags |= CF_NEVER_WAIT;
}
}
--
2.4.10

net/haproxy/patches/0005-MINOR-config-extend-the-default-max-hostname-length-.patch (+0, -52)

@@ -1,52 +0,0 @@
From e77015cdc18ab74aba61cdf57de56d06be5c2a4d Mon Sep 17 00:00:00 2001
From: Willy Tarreau <w@1wt.eu>
Date: Wed, 14 Jan 2015 11:48:58 +0100
Subject: [PATCH 05/10] MINOR: config: extend the default max hostname length
to 64 and beyond
Some users reported that the default max hostname length of 32 is too
short in some environments. This patch does two things :
- it relies on the system's max hostname length as found in MAXHOSTNAMELEN
if it is set. This is the most logical thing to do as the system libs
generally present the appropriate value supported by the system. This
value is 64 on Linux and 256 on Solaris, to give a few examples.
- otherwise it defaults to 64
It is still possible to override this value by defining MAX_HOSTNAME_LEN at
build time. After some observation time, this patch may be backported to
1.5 if it does not cause any build issue, as it is harmless and may help
some users.
(cherry picked from commit 75abcb3106e2c27ef983df885558cf94e01f717a)
Cc: Lukas Tribus <luky-37@hotmail.com>
Cc: jose.castro.leon@cern.ch
[wt: no issue reported so far and Jose rightfully asked for it in 1.5]
---
include/common/defaults.h | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/include/common/defaults.h b/include/common/defaults.h
index 0075509..a191b8a 100644
--- a/include/common/defaults.h
+++ b/include/common/defaults.h
@@ -190,8 +190,12 @@
/* Maximum host name length */
#ifndef MAX_HOSTNAME_LEN
-#define MAX_HOSTNAME_LEN 32
-#endif
+#if MAXHOSTNAMELEN
+#define MAX_HOSTNAME_LEN MAXHOSTNAMELEN
+#else
+#define MAX_HOSTNAME_LEN 64
+#endif // MAXHOSTNAMELEN
+#endif // MAX_HOSTNAME_LEN
/* Maximum health check description length */
#ifndef HCHK_DESC_LEN
--
2.4.10
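
For context, a small self-contained sketch (not the HAProxy source) of how the
resulting macro is typically consumed; the assumption here is that on Linux and
the BSDs, MAXHOSTNAMELEN usually comes from <sys/param.h>:

    #include <sys/param.h>   /* may provide MAXHOSTNAMELEN */
    #include <stdio.h>
    #include <unistd.h>

    #ifndef MAX_HOSTNAME_LEN
    # if MAXHOSTNAMELEN
    #  define MAX_HOSTNAME_LEN MAXHOSTNAMELEN   /* trust the system limit */
    # else
    #  define MAX_HOSTNAME_LEN 64               /* fallback default */
    # endif
    #endif

    int main(void)
    {
        char hostname[MAX_HOSTNAME_LEN];

        if (gethostname(hostname, sizeof(hostname)) == 0)
            printf("%s (buffer of %d bytes)\n", hostname, MAX_HOSTNAME_LEN);
        return 0;
    }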

net/haproxy/patches/0006-BUG-MEDIUM-http-don-t-enable-auto-close-on-the-respo.patch (+0, -49)

@@ -1,49 +0,0 @@
From 3de8e7ab8d9125402cc1a8fb48ee475ee21d7d4c Mon Sep 17 00:00:00 2001
From: Willy Tarreau <w@1wt.eu>
Date: Wed, 25 Nov 2015 20:11:11 +0100
Subject: [PATCH 06/10] BUG/MEDIUM: http: don't enable auto-close on the
response side
There is a bug where "option http-keep-alive" doesn't force a response
to stay in keep-alive if the server sends the FIN along with the response
on the second or subsequent response. The reason is that the auto-close
was forced enabled when recycling the HTTP transaction and it's never
disabled along the response processing chain before the SHUTR gets a
chance to be forwarded to the client side. The MSG_DONE state of the
HTTP response properly disables it but too late.
There's no more reason for enabling auto-close here, because either it
doesn't matter in non-keep-alive modes because the connection is closed,
or it is automatically enabled by process_stream() when it sees there's
no analyser on the stream.
This bug also affects 1.5 so a backport is desired.
(cherry picked from commit 714ea78c9a09fe6a35a1f2d86af8f7fc9abb64d1)
(cherry picked from commit a15091be17f27fcf4e3a84338df1a8b732e396a1)
---
src/proto_http.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/src/proto_http.c b/src/proto_http.c
index b32e778..5facfbb 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -4946,11 +4946,13 @@ void http_end_txn_clean_session(struct session *s)
s->rep->flags |= CF_EXPECT_MORE;
}
- /* we're removing the analysers, we MUST re-enable events detection */
+ /* we're removing the analysers, we MUST re-enable events detection.
+ * We don't enable close on the response channel since it's either
+ * already closed, or in keep-alive with an idle connection handler.
+ */
channel_auto_read(s->req);
channel_auto_close(s->req);
channel_auto_read(s->rep);
- channel_auto_close(s->rep);
/* we're in keep-alive with an idle connection, monitor it */
si_idle_conn(s->req->cons);
--
2.4.10

net/haproxy/patches/0007-BUG-MEDIUM-stream-fix-half-closed-timeout-handling.patch (+0, -88)

@@ -1,88 +0,0 @@
From 9154bc92ed11c6de75573dec341b6a0ce68bd0eb Mon Sep 17 00:00:00 2001
From: Willy Tarreau <w@1wt.eu>
Date: Wed, 25 Nov 2015 20:17:27 +0100
Subject: [PATCH 07/10] BUG/MEDIUM: stream: fix half-closed timeout handling
client-fin and server-fin are bogus. They are applied on the write
side after a SHUTR was seen. The immediate effect is that sometimes
if a SHUTR was seen after a SHUTW on the same side, the timeout is
enabled again regardless of the fact that the output is already
closed. This results in the timeout event not being processed and
a busy poll loop occurring until another timeout on the stream gets
rid of it. Note that haproxy continues its job during this; it's just
that it eats all the CPU trying to handle an event that it ignores.
A reproducible case consists of having a client stop reading data from
a server to ensure data remain in the response buffer, then the client
sends a shutdown(write). If abortonclose is enabled on haproxy, the
shutdown is passed to the server side and the server responds with a
SHUTR that cannot immediately be forwarded to the client since the
buffer is full. During this time the event is ignored and the task is
woken again in loops.
It is worth noting that the timeout handling since 1.5 is a bit fragile
and that it might be possible that other similar conditions still exist,
so the timeout handling should be audited regarding this issue.
Many thanks to BaiYang for providing detailed information showing the
problem in action.
This bug also affects 1.5 thus the fix must be backported.
(cherry picked from commit f25b3573d65fd2411c7537b7b0a4817b478df909)
[Note for 1.5, it's in session.c here]
(cherry picked from commit 44e86286159474a52dc74f80d3271504cc6f1550)
---
src/session.c | 16 ----------------
1 file changed, 16 deletions(-)
diff --git a/src/session.c b/src/session.c
index 7520a85..2b2ad78 100644
--- a/src/session.c
+++ b/src/session.c
@@ -2213,10 +2213,6 @@ struct task *process_session(struct task *t)
if (unlikely((s->req->flags & (CF_SHUTW|CF_SHUTW_NOW|CF_AUTO_CLOSE|CF_SHUTR)) ==
(CF_AUTO_CLOSE|CF_SHUTR))) {
channel_shutw_now(s->req);
- if (tick_isset(s->fe->timeout.clientfin)) {
- s->rep->wto = s->fe->timeout.clientfin;
- s->rep->wex = tick_add(now_ms, s->rep->wto);
- }
}
/* shutdown(write) pending */
@@ -2241,10 +2237,6 @@ struct task *process_session(struct task *t)
if (s->req->prod->flags & SI_FL_NOHALF)
s->req->prod->flags |= SI_FL_NOLINGER;
si_shutr(s->req->prod);
- if (tick_isset(s->fe->timeout.clientfin)) {
- s->rep->wto = s->fe->timeout.clientfin;
- s->rep->wex = tick_add(now_ms, s->rep->wto);
- }
}
/* it's possible that an upper layer has requested a connection setup or abort.
@@ -2391,10 +2383,6 @@ struct task *process_session(struct task *t)
if (unlikely((s->rep->flags & (CF_SHUTW|CF_SHUTW_NOW|CF_AUTO_CLOSE|CF_SHUTR)) ==
(CF_AUTO_CLOSE|CF_SHUTR))) {
channel_shutw_now(s->rep);
- if (tick_isset(s->be->timeout.serverfin)) {
- s->req->wto = s->be->timeout.serverfin;
- s->req->wex = tick_add(now_ms, s->req->wto);
- }
}
/* shutdown(write) pending */
@@ -2417,10 +2405,6 @@ struct task *process_session(struct task *t)
if (s->rep->prod->flags & SI_FL_NOHALF)
s->rep->prod->flags |= SI_FL_NOLINGER;
si_shutr(s->rep->prod);
- if (tick_isset(s->be->timeout.serverfin)) {
- s->req->wto = s->be->timeout.serverfin;
- s->req->wex = tick_add(now_ms, s->req->wto);
- }
}
if (s->req->prod->state == SI_ST_DIS || s->req->cons->state == SI_ST_DIS)
--
2.4.10
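
As a rough illustration of the scenario described in this patch (not part of the
fix itself), a client that provokes the half-closed state could look like the
plain-C sketch below: it sends a request, deliberately never reads the response
so the buffer stays full, then half-closes the write side. The address, port and
request line are placeholders for a frontend running with abortonclose.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in sa = { 0 };
        int fd;

        sa.sin_family = AF_INET;
        sa.sin_port = htons(8001);                 /* placeholder frontend port */
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            return 1;

        {
            const char req[] = "GET /large-object HTTP/1.1\r\nHost: test\r\n\r\n";
            send(fd, req, sizeof(req) - 1, 0);
        }

        shutdown(fd, SHUT_WR);   /* shutdown(write): FIN towards haproxy */
        sleep(60);               /* never read, so the response stays buffered */
        close(fd);
        return 0;
    }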

net/haproxy/patches/0008-BUG-MEDIUM-cli-changing-compression-rate-limiting-mu.patch (+0, -36)

@@ -1,36 +0,0 @@
From 07ccb48add8c8cb0dd8a0f7d3f4994866d0ef32e Mon Sep 17 00:00:00 2001
From: Willy Tarreau <w@1wt.eu>
Date: Thu, 26 Nov 2015 18:32:39 +0100
Subject: [PATCH 08/10] BUG/MEDIUM: cli: changing compression rate-limiting
must require admin level
Right now it's possible to change the global compression rate limiting
without the CLI being at the admin level.
This fix must be backported to 1.6 and 1.5.
(cherry picked from commit a1c2b2c4f3e65d198a0a4b25a4f655f7b307a855)
(cherry picked from commit 9e5f1489c9f2d6926729890f249f7ebb9d3bfd43)
---
src/dumpstats.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/src/dumpstats.c b/src/dumpstats.c
index b4be2cd..b1bbf31 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -1695,6 +1695,12 @@ static int stats_sock_parse_request(struct stream_interface *si, char *line)
if (strcmp(args[3], "global") == 0) {
int v;
+ if (s->listener->bind_conf->level < ACCESS_LVL_ADMIN) {
+ appctx->ctx.cli.msg = stats_permission_denied_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
if (!*args[4]) {
appctx->ctx.cli.msg = "Expects a maximum input byte rate in kB/s.\n";
appctx->st0 = STAT_CLI_PRINT;
--
2.4.10

net/haproxy/patches/0009-BUILD-freebsd-double-declaration.patch (+0, -31)

@@ -1,31 +0,0 @@
From 97ef6f99b8426ffdc97864fc8bb2d85c87cfdad0 Mon Sep 17 00:00:00 2001
From: Thierry FOURNIER <tfournier@arpalert.org>
Date: Tue, 3 Nov 2015 19:17:37 +0100
Subject: [PATCH 09/10] BUILD: freebsd: double declaration
On freebsd, the macro LIST_PREV already exists in the header file
<sys/queue.h>, and this causes a build error.
This patch removes the macro before redefining it, which ensures
that the error does not occur.
(cherry picked from commit 1db96672c4cd264ebca8197bec93a5ce1b23aaa9)
(cherry picked from commit 6cf9c6b270e57f05abf72cd61f4facb5b6980d57)
---
include/common/mini-clist.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/common/mini-clist.h b/include/common/mini-clist.h
index 3c3f001..404b6fa 100644
--- a/include/common/mini-clist.h
+++ b/include/common/mini-clist.h
@@ -144,6 +144,7 @@ struct cond_wordlist {
* which contains list head <lh>, which is known as element <el> in
* struct pt.
*/
+#undef LIST_PREV
#define LIST_PREV(lh, pt, el) (LIST_ELEM((lh)->p, pt, el))
/*
--
2.4.10

net/haproxy/patches/0010-BUG-MEDIUM-sample-urlp-can-t-match-an-empty-value.patch (+0, -53)

@@ -1,53 +0,0 @@
From 0f836e1361933721c5689c7943143fd6cd260148 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Cyril=20Bont=C3=A9?= <cyril.bonte@free.fr>
Date: Thu, 26 Nov 2015 21:39:56 +0100
Subject: [PATCH 10/10] BUG/MEDIUM: sample: urlp can't match an empty value
Currently urlp fetching samples were able to find parameters with an empty
value, but the return code depended on the value length. The final result was
that acls using urlp couldn't match empty values.
Example of acl which always returned "false":
acl MATCH_EMPTY urlp(foo) -m len 0
The fix consists in unconditionally returning 1 when the parameter is found.
This fix must be backported to 1.6 and 1.5.
(cherry picked from commit ce1ef4df0135f9dc1cb6691395eacb487015fe3e)
(cherry picked from commit 6bd426cf35c95985712369ed528c10a5f80ad8fd)
[ note: in 1.5 we have value+value_l instead of vstart+vend ]
---
src/proto_http.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/src/proto_http.c b/src/proto_http.c
index 5facfbb..3af7880 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -11050,9 +11050,11 @@ find_url_param_pos(char* query_string, size_t query_string_l,
}
/*
- * Given a url parameter name, returns its value and size into *value and
- * *value_l respectively, and returns non-zero. If the parameter is not found,
- * zero is returned and value/value_l are not touched.
+ * Given a url parameter name and a query string, find the next value.
+ * An empty url_param_name matches the first available parameter.
+ * If the parameter is found, 1 is returned and *value / *value_l are updated
+ * to respectively provide a pointer to the value and its length.
+ * Otherwise, 0 is returned and value/value_l are not modified.
*/
static int
find_url_param_value(char* path, size_t path_l,
@@ -11082,7 +11084,7 @@ find_url_param_value(char* path, size_t path_l,
*value = value_start;
*value_l = value_end - value_start;
- return value_end != value_start;
+ return 1;
}
static int
--
2.4.10

net/haproxy/patches/0011-BUG-MEDIUM-peers-table-entries-learned-from-a-remote.patch (+0, -31)

@@ -1,31 +0,0 @@
From 96a1b4a969a5f3c9224d786c79e90d15a47094b0 Mon Sep 17 00:00:00 2001
From: Emeric Brun <ebrun@haproxy.com>
Date: Wed, 16 Dec 2015 15:16:46 +0100
Subject: [PATCH 11/13] BUG/MEDIUM: peers: table entries learned from a remote
are pushed to others after a random delay.
New sticktable entries learned from a remote peer can be pushed to others after
a random delay because they are not inserted at the right position in the updates
tree.
(cherry picked from commit 234fc3c31e751f8191b9b78fa5fd16663c2627fe)
(cherry picked from commit 8b1a697362977b8392caca3efaf97a5a8a8c782b)
---
src/peers.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/peers.c b/src/peers.c
index 0564d3d..92b4df0 100644
--- a/src/peers.c
+++ b/src/peers.c
@@ -720,7 +720,7 @@ switchstate:
ts = stktable_store(ps->table->table, newts, 0);
newts = NULL; /* don't reuse it */
- ts->upd.key= (++ps->table->table->update)+(2^31);
+ ts->upd.key= (++ps->table->table->update)+(2147483648U);
eb = eb32_insert(&ps->table->table->updates, &ts->upd);
if (eb != &ts->upd) {
eb32_delete(eb);
--
2.4.10
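
The subtle point in the hunk above is the constant: in C, ^ is bitwise XOR, not
exponentiation, so the original expression (2^31) adds 29 rather than 2 to the
31st power, which is why the replacement spells the offset out as 2147483648U.
A tiny standalone check, for illustration only:

    #include <stdio.h>

    int main(void)
    {
        printf("2 ^ 31      = %d\n", 2 ^ 31);        /* bitwise XOR: prints 29 */
        printf("2147483648U = %u\n", 2147483648U);   /* the intended 2^31 offset */
        printf("1U << 31    = %u\n", 1U << 31);      /* same value, another spelling */
        return 0;
    }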

net/haproxy/patches/0012-BUG-MEDIUM-peers-old-stick-table-updates-could-be-re.patch (+0, -28)

@@ -1,28 +0,0 @@
From a320fd146f802a851a396b2cde491711a4fb87cf Mon Sep 17 00:00:00 2001
From: Emeric Brun <ebrun@haproxy.com>
Date: Wed, 16 Dec 2015 15:28:12 +0100
Subject: [PATCH 12/13] BUG/MEDIUM: peers: old stick table updates could be
repushed.
Because the stick table updates tree was not properly initialized to EB_ROOT_UNIQUE.
(cherry picked from commit 1c6235dbba0a67bad1d5e57ada88f28e1270a5cb)
(cherry picked from commit 6e80935a77c8c2c67a982780a0f14c241f02f2aa)
---
src/stick_table.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/stick_table.c b/src/stick_table.c
index 48d5710..6310690 100644
--- a/src/stick_table.c
+++ b/src/stick_table.c
@@ -385,6 +385,7 @@ int stktable_init(struct stktable *t)
if (t->size) {
memset(&t->keys, 0, sizeof(t->keys));
memset(&t->exps, 0, sizeof(t->exps));
+ t->updates = EB_ROOT_UNIQUE;
t->pool = create_pool("sticktables", sizeof(struct stksess) + t->data_size + t->key_size, MEM_F_SHARED);
--
2.4.10

net/haproxy/patches/0013-CLEANUP-haproxy-using-_GNU_SOURCE-instead-of-__USE_G.patch (+0, -43)

@@ -1,43 +0,0 @@
From 21fab69d332bfafd0a214ee29d8ad0779a055988 Mon Sep 17 00:00:00 2001
From: David Carlier <devnexen@gmail.com>
Date: Tue, 8 Dec 2015 21:43:09 +0000
Subject: [PATCH 13/13] CLEANUP: haproxy: using _GNU_SOURCE instead of
__USE_GNU macro.
In order to properly enable sched_setaffinity(), some versions of Linux require
_GNU_SOURCE rather than __USE_GNU (spotted on Alpine Linux, for instance). The
change is also made for the sake of consistency, since __USE_GNU is not used
anywhere else in the code, and because _GNU_SOURCE appears to be the preferred
way to enable non-portable code on Linux. On glibc-based Linux versions,
_GNU_SOURCE defines __USE_GNU, so this should be safe enough.
(cherry picked from commit 7ece096767d329d0ea04b70a1fb2c8b8a96b47e0)
(cherry picked from commit 5a0ac35503f88a7bc8ee2c4f865354fa6cc25901)
---
src/haproxy.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/src/haproxy.c b/src/haproxy.c
index b94252d..20480a1 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -25,6 +25,7 @@
*
*/
+#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
@@ -47,9 +48,7 @@
#include <syslog.h>
#include <grp.h>
#ifdef USE_CPU_AFFINITY
-#define __USE_GNU
#include <sched.h>
-#undef __USE_GNU
#endif
#ifdef DEBUG_FULL
--
2.4.10
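
For context, a minimal standalone example (generic glibc usage, not HAProxy code)
of why _GNU_SOURCE must be defined before the first system header is included:
CPU_ZERO/CPU_SET and sched_setaffinity() are only exposed by <sched.h> when the
feature-test macro is already set.

    #define _GNU_SOURCE        /* must come before any #include */
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        cpu_set_t mask;

        CPU_ZERO(&mask);
        CPU_SET(0, &mask);     /* pin the calling process to CPU 0 */

        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pid %d pinned to CPU 0\n", (int)getpid());
        return 0;
    }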
