Message-ID: <a5b3f56f-a7f8-4fa5-8cd6-de9c836db2ac@gmail.com>
Date: Thu, 7 Aug 2025 14:24:12 -0700
From: Mohsin Bashir <mohsin.bashr@...il.com>
To: Alexander Duyck <alexander.duyck@...il.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: netdev@...r.kernel.org, kuba@...nel.org, alexanderduyck@...com,
andrew+netdev@...n.ch, davem@...emloft.net, edumazet@...gle.com,
pabeni@...hat.com, horms@...nel.org, vadim.fedorenko@...ux.dev,
jdamato@...tly.com, sdf@...ichev.me, aleksander.lobakin@...el.com,
ast@...nel.org, daniel@...earbox.net, hawk@...nel.org,
john.fastabend@...il.com
Subject: Re: [PATCH net-next 5/9] eth: fbnic: Add XDP pass, drop, abort
support
>>>> Hi Mohsin,
>>>>
>>>> I thought we were past the times when we read the prog pointer for
>>>> each processed packet, and agreed on reading the pointer once per
>>>> napi loop?
>>>
>>> This is reading the cached pointer from the netdev. Are you saying you
>>> would rather have this as a stack pointer instead? I don't really see
>>> the advantage of reading it once per napi poll session versus once
>>> per packet.
>>
>> Hi Alex,
>>
>> This is your only reason (at least currently in this patch) to load the
>> cacheline from the netdev struct, whereas I was just suggesting to
>> piggyback on the fact that the bpf prog pointer will not change within
>> a single napi loop.
>>
>> It's up to you of course, and should be considered a micro-optimization.
>
> The cost of the "extra cacheline" should be nil since, from what I can
> tell, xdp_prog shares the cacheline with gro_max_size and _rx, so that
> cacheline is going to be pulled in eventually regardless of which path
> we go with.
>
>
Hi Maciej,
I appreciate your suggestion regarding the micro-optimization. However, at
this time we are not planning to adopt this change. I am all ears to any
further thoughts or concerns you may have about it.
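
For illustration, here is a minimal sketch of the once-per-poll read being
discussed, assuming the prog pointer is cached in struct fbnic_net as the
patch context suggests; the xdp_prog field and the loop body are simplified
placeholders, not the actual fbnic receive path:

static int fbnic_poll(struct napi_struct *napi, int budget)
{
	struct fbnic_net *fbn = netdev_priv(napi->dev);
	/* Read the prog pointer once per poll; it cannot change while
	 * this napi loop runs, so every packet can use the local copy
	 * instead of re-reading it from the netdev private struct.
	 */
	struct bpf_prog *xdp_prog = READ_ONCE(fbn->xdp_prog);
	int work_done = 0;

	while (work_done < budget) {
		/* ... fetch the next Rx descriptor/frame ... */
		if (xdp_prog) {
			/* run the XDP program against the frame and
			 * act on XDP_PASS/XDP_DROP/XDP_ABORTED
			 */
		}
		work_done++;
	}

	return work_done;
}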