Message-ID: <01ac3ff4-4c06-7a6c-13fc-29ca9ed3ad88@gmail.com>
Date: Mon, 30 Sep 2019 17:14:57 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: John Ousterhout <ouster@...stanford.edu>, netdev@...r.kernel.org
Subject: Re: BUG: sk_backlog.len can overestimate
On 9/30/19 4:58 PM, John Ousterhout wrote:
> As of 4.16.10, it appears to me that sk->sk_backlog.len does not
> provide an accurate estimate of the backlog length; this reduces the
> usefulness of the "limit" argument to sk_add_backlog.
>
> The problem is that, under heavy load, sk->sk_backlog.len can grow
> arbitrarily large even though the actual amount of data in the
> backlog is small. This happens because __release_sock doesn't reset
> the backlog length until it has completely caught up. Under heavy
> load, new packets can arrive continuously into the backlog (which
> increases sk_backlog.len) while other packets are being serviced.
> This can go on indefinitely, so sk_backlog.len never gets reset and
> can become arbitrarily large.
Certainly not.
It cannot grow arbitrarily large, unless perhaps a backport has gone wrong.
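
For reference, the admission check bounding that growth looks roughly like
this (a condensed sketch of the helpers in include/net/sock.h; details such
as memory-scheduling checks vary across kernel versions). Once the charged
length plus receive-queue memory exceeds the caller's limit, new packets
are refused, so sk_backlog.len stays bounded:

/* Condensed sketch of the backlog admission path (include/net/sock.h).
 * sk_backlog.len is charged on enqueue and only zeroed by
 * __release_sock() once the queue fully drains, so qsize can still
 * count packets that have already been processed.
 */
static inline bool sk_rcvqueues_full(const struct sock *sk,
				     unsigned int limit)
{
	unsigned int qsize = sk->sk_backlog.len +
			     atomic_read(&sk->sk_rmem_alloc);

	return qsize > limit;
}

static inline __must_check
int sk_add_backlog(struct sock *sk, struct sk_buff *skb,
		   unsigned int limit)
{
	if (sk_rcvqueues_full(sk, limit))
		return -ENOBUFS;	/* caller drops the packet */

	__sk_add_backlog(sk, skb);	/* link skb on sk->sk_backlog */
	sk->sk_backlog.len += skb->truesize;
	return 0;
}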
>
> Because of this, the "limit" argument to sk_add_backlog may not be
> useful, since it could result in packets being discarded even though
> the backlog is not very large.
>
You will have to study the git log/history for the details. The limit _is_
useful, and we reset the backlog length in __release_sock() only when it is
_safe_ to do so.
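
Concretely, here is a lightly trimmed sketch of __release_sock()
(net/core/sock.c; exact details vary by kernel version). Zeroing
sk_backlog.len anywhere other than after the queue is observed empty under
the lock would let a flood of producers keep the loop running forever:

/* Lightly trimmed sketch of __release_sock() (net/core/sock.c).
 * The socket spinlock is dropped while each chain of skbs is
 * processed, so softirq context can keep appending to the backlog
 * (growing sk_backlog.len). The length is zeroed only once the
 * queue is found empty while the lock is held.
 */
void __release_sock(struct sock *sk)
{
	struct sk_buff *skb, *next;

	while ((skb = sk->sk_backlog.head) != NULL) {
		/* Detach the current chain; producers may start a new one. */
		sk->sk_backlog.head = sk->sk_backlog.tail = NULL;

		spin_unlock_bh(&sk->sk_lock.slock);

		do {
			next = skb->next;
			prefetch(next);
			skb->next = NULL;
			sk_backlog_rcv(sk, skb);
			cond_resched();
			skb = next;
		} while (skb != NULL);

		spin_lock_bh(&sk->sk_lock.slock);
	}

	/* Zeroing here, and not per-packet, guarantees we cannot loop
	 * forever while a wild producer attempts to flood us.
	 */
	sk->sk_backlog.len = 0;
}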
Assuming you are talking about TCP, I suggest you use a more recent kernel:
linux-5.0 gained coalescing in the backlog queue, which helped quite a bit.
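
That coalescing lives in tcp_add_backlog() (net/ipv4/tcp_ipv4.c). A heavily
condensed sketch follows, assuming the v5.0 mainline structure; the TCP
header compatibility tests (shown here as a hypothetical can_coalesce()
check) and several bookkeeping steps are elided. When a new segment can be
merged into the queue tail, only the truesize delta is charged, so the
backlog holds fewer, larger skbs:

/* Heavily condensed sketch of tcp_add_backlog() (net/ipv4/tcp_ipv4.c,
 * v5.0+). can_coalesce() stands in for the real header compatibility
 * checks, which are elided here; the slack added to the limit also
 * varies by version.
 */
bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
{
	u32 limit = sk->sk_rcvbuf + sk->sk_sndbuf + 64 * 1024;
	struct sk_buff *tail = sk->sk_backlog.tail;
	bool fragstolen;
	int delta;

	if (tail && can_coalesce(tail, skb) &&	/* hypothetical check */
	    skb_try_coalesce(tail, skb, &fragstolen, &delta)) {
		/* Merged into the tail skb: charge only the extra bytes. */
		sk->sk_backlog.len += delta;
		kfree_skb_partial(skb, fragstolen);
		return false;			/* not dropped */
	}

	if (unlikely(sk_add_backlog(sk, skb, limit))) {
		bh_unlock_sock(sk);
		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPBACKLOGDROP);
		return true;			/* dropped */
	}
	return false;
}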