Message-ID: <CALOAHbCdRjq2n37GpeBdorcbxXMDX2vNDLftypViJd5hRTA28A@mail.gmail.com>
Date: Wed, 27 Jun 2018 23:14:30 +0800
From: Yafang Shao <laoar.shao@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Eric Dumazet <edumazet@...gle.com>,
David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next] tcp: replace LINUX_MIB_TCPOFODROP with
LINUX_MIB_TCPRMEMFULLDROP for drops due to receive buffer full
On Wed, Jun 27, 2018 at 10:48 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>
>
> On 06/27/2018 04:50 AM, Yafang Shao wrote:
>> When sk_rmem_alloc is larger than the receive buffer and we can't
>> schedule more memory for it, the skb will be dropped.
>>
>> In the above situation, if this skb is put into the ofo queue,
>> LINUX_MIB_TCPOFODROP is incremented to track it,
>> while if this skb is put into the receive queue, there's no record.
>>
>> So LINUX_MIB_TCPOFODROP is replaced with LINUX_MIB_TCPRMEMFULLDROP to track
>> this behavior.
>
>
> Hi Yafang
>
> I do not want to remove TCPOFODrop and mix multiple causes in one single counter.
>
> Please take a look at commit a6df1ae9383697c for the reasoning.
>
Got it!
What about introducing a new counter, i.e. TCPRcvQFullDrop?
Thanks
Yafang
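
For context, the asymmetry described in the patch lives in net/ipv4/tcp_input.c. Below is a simplified sketch of the two enqueue paths around the v4.17/v4.18 timeframe; function bodies are elided and this is not verbatim kernel source:

    /*
     * Simplified sketch of the two enqueue paths in net/ipv4/tcp_input.c
     * (circa v4.17/v4.18); bodies elided, not verbatim kernel source.
     */

    /* Out-of-order path: a drop due to rcvbuf pressure is counted. */
    static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
    {
            if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) {
                    NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFODROP);
                    tcp_drop(sk, skb);
                    return;
            }
            /* ... insert skb into the out-of-order queue ... */
    }

    /* In-order path: the equivalent drop leaves no MIB trace. */
    static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
    {
            /* ... in-sequence, in-window segment ... */
            if (tcp_try_rmem_schedule(sk, skb, skb->truesize))
                    goto drop;      /* dropped, but no counter is bumped */
            /* ... queue skb on sk->sk_receive_queue ... */
            return;
    drop:
            tcp_drop(sk, skb);
    }

As described above, the posted patch folded both drop cases into a single new counter; Eric's objection is that mixing the out-of-order and in-order causes in one counter loses information, hence Yafang's follow-up suggestion of a distinct counter for the receive-queue case.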