Message-ID: <a20526b4-677b-0dea-98f5-ec3aa70f95dd@iogearbox.net>
Date: Thu, 15 Apr 2021 22:31:00 +0200
From: Daniel Borkmann <daniel@...earbox.net>
To: Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
David Ahern <dsahern@...il.com>
Cc: Lorenzo Bianconi <lorenzo@...nel.org>, bpf@...r.kernel.org,
netdev@...r.kernel.org, davem@...emloft.net, kuba@...nel.org,
ast@...nel.org, brouer@...hat.com, song@...nel.org
Subject: Re: [PATCH v2 bpf-next] cpumap: bulk skb using netif_receive_skb_list
On 4/15/21 10:10 PM, Lorenzo Bianconi wrote:
>> On 4/15/21 9:03 AM, Lorenzo Bianconi wrote:
>>>> On 4/15/21 8:05 AM, Daniel Borkmann wrote:
>>> [...]
>>>>>> &stats);
>>>>>
>>>>> Given we stop counting drops with the netif_receive_skb_list(), we should
>>>>> then also remove drops from trace_xdp_cpumap_kthread(), imho, as otherwise
>>>>> it is rather misleading (as in: drops actually happening, but 0 are shown
>>>>> from the tracepoint). Given they are not considered stable API, I would
>>>>> just remove those to make it clear to users that they cannot rely on this
>>>>> counter anymore anyway.
>>>>
>>>> What's the visibility into drops then? Seems like it would be fairly
>>>> easy to have netif_receive_skb_list return number of drops.
>>>
>>> In order to return drops from netif_receive_skb_list() I guess we would need
>>> to introduce some extra checks in the hot path. Moreover, packet drops are
>>> already accounted for in the networking stack, and this would currently be
>>> the only consumer of that info. Is it worth doing so?
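
(To make the trade-off concrete, a purely hypothetical sketch: the in-tree
netif_receive_skb_list() returns void, so neither the function name below nor
the surrounding variables are existing kernel API, and the per-skb bookkeeping
such a variant implies is exactly the extra hot-path cost mentioned above.)

    /* Hypothetical: an skb-list receive variant that reports how many
     * skbs the stack dropped, so cpumap could keep feeding the
     * tracepoint's drops field.
     */
    unsigned int drops;

    drops = netif_receive_skb_list_ret(&list);  /* not an existing function */
    trace_xdp_cpumap_kthread(rcpu->map_id, n, drops, sched, &stats);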
>>
>> right - softnet_stat shows the drop. So what is lost here is knowing that
>> the packet came from a cpumap XDP redirect.
>>
>> Better insight into drops is needed, but I guess in this case knowing it
>> came from the cpumap does not really help explain why it was dropped - that
>> is more a matter for __netif_receive_skb_list_core. I guess it is ok to
>> drop the counter from the tracepoint.
>
> Applying the current patch, drops just counts the number of kmem_cache_alloc_bulk()
> failures. Looking at the kmem_cache_alloc_bulk() code, it does not seem to me that
> there are any failure counters. So I am wondering: is this important info for the
> user? If so, I guess we can just rename the counter to something more meaningful
> (e.g. skb_alloc_failures).
Right, at minimum it could be renamed, but I also wonder whether cpumap users really
run this tracepoint permanently to check for that ... presumably not, and if there is
a temporary drop due to that while the tracepoint is not enabled, you won't see it
either. So this field could probably be dropped and, if needed, the accounting in
cpumap improved in a different way.
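
For reference, the flow under discussion looks roughly like the below (a simplified
sketch of the cpu_map_kthread_run() path with the patch applied, not the literal
code; skb_alloc_failures stands in for the rename suggested above and some names
are just illustrative):

    /* Simplified sketch, names partly illustrative. */
    gfp_t gfp = __GFP_ZERO | GFP_ATOMIC;
    void *skbs[CPUMAP_BATCH];
    unsigned int skb_alloc_failures = 0;
    LIST_HEAD(list);
    int i, m;

    m = kmem_cache_alloc_bulk(skbuff_head_cache, gfp, n, skbs);
    if (unlikely(m == 0)) {
            /* Bulk allocation failed: this shortfall is all that the
             * tracepoint's drops field counts after the patch.
             */
            memset(skbs, 0, sizeof(void *) * n);
            skb_alloc_failures = n;
    }

    for (i = 0; i < n; i++) {
            struct xdp_frame *xdpf = frames[i];
            struct sk_buff *skb = skbs[i];

            /* On bulk-alloc failure skb is NULL and the frame is
             * simply returned (simplified here).
             */
            skb = skb ? __xdp_build_skb_from_frame(xdpf, skb, xdpf->dev_rx)
                      : NULL;
            if (!skb) {
                    xdp_return_frame(xdpf);
                    continue;
            }
            list_add_tail(&skb->list, &list);
    }

    /* Drops from here on are accounted inside the stack;
     * netif_receive_skb_list() returns void, so they no longer
     * show up in the tracepoint.
     */
    netif_receive_skb_list(&list);
    trace_xdp_cpumap_kthread(rcpu->map_id, n, skb_alloc_failures,
                             sched, &stats);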