- <p>Fix a logic bug that could cause Privoxy to reuse a tainted
- server socket. It could happen for server sockets that got
- tainted by a server-header-tagger-induced block, in which case
- Privoxy doesn't necessarily read the whole server response. If
- keep-alive was enabled and the request following the blocked one
- was to the same host and using the same forwarding settings,
- Privoxy would send it on the tainted server socket. While the
- server would simply treat it as a pipelined request, Privoxy
- would later on fail to properly parse the server's response as it
- would try to parse the unread data from the first response as
- server headers for the second one. Regression introduced in
- 3.0.17.</p>
- </li>
-
- <li>
 - <p>When implying keep-alive in client_connection(), remember
 - that the client didn't. This fixes a regression introduced in
 - 3.0.13 that would cause Privoxy to wait for additional client
 - requests after receiving an HTTP/1.1 request with "Connection:
 - close" set and connection sharing enabled. With clients like
 - curl, which terminate the client connection after detecting
 - that the whole body has been received, it doesn't really
 - matter, but with clients like FreeBSD's fetch the client
 - connection would be kept open until it timed out.</p>
- </li>
-
- <li>
 - <p>Fix a subtle race condition between
 - prepare_csp_for_next_request() and sweep(). A thread preparing
 - itself for the next client request could briefly appear to be
 - inactive. If all other threads were already using more recent
 - files, the thread could get its files swept away under its
 - feet. I've only seen this while stress testing under Valgrind
 - and touching action files in a loop. It's unlikely to have
 - caused any actual problems in the real world.</p>