
haproxy: Update all patches for HAProxy v1.8.8

- Add new patches (see https://www.haproxy.org/bugs/bugs-1.8.8.html)
- Raise patch-level to 04

Signed-off-by: Christian Lachner <gladiac@gmail.com>
Branch: lilik-openwrt-22.03
Author: Christian Lachner, 6 years ago
Commit: e5a860634b

3 changed files with 109 additions and 1 deletion:

  1. net/haproxy/Makefile (+1, -1)
  2. net/haproxy/patches/0015-BUG-MINOR-lua-ensure-large-proxy-IDs-can-be-represented.patch (+38, -0)
  3. net/haproxy/patches/0016-BUG-MEDIUM-http-dont-always-abort-transfers-on-CF_SHUTR.patch (+70, -0)

net/haproxy/Makefile (+1, -1)

@@ -10,7 +10,7 @@ include $(TOPDIR)/rules.mk
 PKG_NAME:=haproxy
 PKG_VERSION:=1.8.8
-PKG_RELEASE:=03
+PKG_RELEASE:=04
 PKG_SOURCE:=haproxy-$(PKG_VERSION).tar.gz
 PKG_SOURCE_URL:=https://www.haproxy.org/download/1.8/src/
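In the OpenWrt build system, bumping PKG_RELEASE forces a rebuild and yields a new package file name even though the upstream version is unchanged. A minimal sketch of the naming convention, assuming an x86_64 target (the architecture suffix is an arbitrary example, not taken from this commit):

```shell
# Hypothetical illustration: OpenWrt composes the .ipk file name as
# <name>_<PKG_VERSION>-<PKG_RELEASE>_<arch>.ipk
PKG_VERSION=1.8.8
PKG_RELEASE=04
echo "haproxy_${PKG_VERSION}-${PKG_RELEASE}_x86_64.ipk"
```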


net/haproxy/patches/0015-BUG-MINOR-lua-ensure-large-proxy-IDs-can-be-represented.patch (+38, -0)

@@ -0,0 +1,38 @@
commit edb4427ab7c070a16cb9a23460f68b3fc3c041bb
Author: Willy Tarreau <w@1wt.eu>
Date: Sun May 6 14:50:09 2018 +0200
BUG/MINOR: lua: ensure large proxy IDs can be represented
In function hlua_fcn_new_proxy() too small a buffer was passed to
snprintf(), resulting in large proxy or listener IDs to make
snprintf() fail. It is unlikely to meet this case but let's fix it
anyway.
This fix must be backported to all stable branches where it applies.
(cherry picked from commit 29d698040d6bb56b29c036aeba05f0d52d8ce94b)
Signed-off-by: Willy Tarreau <w@1wt.eu>
diff --git a/src/hlua_fcn.c b/src/hlua_fcn.c
index a8d53d45..1df08f85 100644
--- a/src/hlua_fcn.c
+++ b/src/hlua_fcn.c
@@ -796,7 +796,7 @@ int hlua_fcn_new_proxy(lua_State *L, struct proxy *px)
struct server *srv;
struct listener *lst;
int lid;
- char buffer[10];
+ char buffer[17];
lua_newtable(L);
@@ -836,7 +836,7 @@ int hlua_fcn_new_proxy(lua_State *L, struct proxy *px)
if (lst->name)
lua_pushstring(L, lst->name);
else {
- snprintf(buffer, 10, "sock-%d", lid);
+ snprintf(buffer, sizeof(buffer), "sock-%d", lid);
lid++;
lua_pushstring(L, buffer);
}

net/haproxy/patches/0016-BUG-MEDIUM-http-dont-always-abort-transfers-on-CF_SHUTR.patch (+70, -0)

@@ -0,0 +1,70 @@
commit 1c10e5b1b95142bb3ac385be1e60d8b180b2e99e
Author: Willy Tarreau <w@1wt.eu>
Date: Wed May 16 11:35:05 2018 +0200
BUG/MEDIUM: http: don't always abort transfers on CF_SHUTR
Pawel Karoluk reported on Discourse[1] that HTTP/2 breaks url_param.
Christopher managed to track it down to the HTTP_MSGF_WAIT_CONN flag
which is set there to ensure the connection is validated before sending
the headers, as we may need to rewind the stream and hash again upon
redispatch. What happens is that in the forwarding code we refrain
from forwarding when this flag is set and the connection is not yet
established, and for this we go through the missing_data_or_waiting
path. This exit path was initially designed only to wait for data
from the client, so it rightfully checks whether or not the client
has already closed since in that case it must not wait for more data.
But it also has the side effect of aborting such a transfer if the
client has closed after the request, which is exactly what happens
in H2.
A study on the code reveals that this whole combined check should
be revisited : while it used to be true that waiting had the same
error conditions as missing data, it's not true anymore. Some other
corner cases were identified, such as the risk to report a server
close instead of a client timeout when waiting for the client to
read the last chunk of data if the shutr is already present, or
the risk to fail a redispatch when a client uploads some data and
closes before the connection establishes. The compression seems to
be at risk of rare issues there if a write to a full buffer is not
yet possible but a shutr is already queued.
At the moment these risks are extremely unlikely but they do exist,
and their impact is very minor since it mostly concerns an issue not
being optimally handled, and the fixes risk to cause more serious
issues. Thus this patch only focuses on how the HTTP_MSGF_WAIT_CONN
is handled and leaves the rest untouched.
This patch needs to be backported to 1.8, and could be backported to
earlier versions to properly take care of HTTP/1 requests passing via
url_param which are closed immediately after the headers, though this
is unlikely as this behaviour is only exhibited by scripts.
[1] https://discourse.haproxy.org/t/haproxy-1-8-x-url-param-issue-in-http2/2482/13
(cherry picked from commit ba20dfc50161ba705a746d54ebc1a0a45c46beab)
Signed-off-by: Willy Tarreau <w@1wt.eu>
diff --git a/src/proto_http.c b/src/proto_http.c
index 4c18a27c..b384cef1 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -4865,7 +4865,8 @@ int http_request_forward_body(struct stream *s, struct channel *req, int an_bit)
if (!(s->res.flags & CF_READ_ATTACHED)) {
channel_auto_connect(req);
req->flags |= CF_WAKE_CONNECT;
- goto missing_data_or_waiting;
+ channel_dont_close(req); /* don't fail on early shutr */
+ goto waiting;
}
msg->flags &= ~HTTP_MSGF_WAIT_CONN;
}
@@ -4949,6 +4950,7 @@ int http_request_forward_body(struct stream *s, struct channel *req, int an_bit)
goto return_bad_req_stats_ok;
}
+ waiting:
/* waiting for the last bits to leave the buffer */
if (req->flags & CF_SHUTW)
goto aborted_xfer;
