Message-ID: <ca932605-142f-5264-e75e-3b16c6dc1e3d@marvell.com>
Date: Tue, 24 Nov 2020 18:26:29 +0300
From: Igor Russkikh <irusskikh@...vell.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
CC: "Ramsay, Lincoln" <Lincoln.Ramsay@...i.com>,
Florian Westphal <fw@...len.de>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, <netdev@...r.kernel.org>,
"Dmitry Bogdanov [C]" <dbogdanov@...vell.com>
Subject: Re: [EXT] Re: [PATCH v3] aquantia: Remove the build_skb path
>>
>> Since normal layout is 1400 packets - we do use 2K (half page) for each
>> packet.
>
> What is 'normal layout is 1400 packets'? Didn't you mean the 1500 byte
> standard MTU? So this is what you've been trying to tell me - that for
> 1500 byte mtu and 1k HW granularity you need to provide 2k of contiguous
> space to the HW, correct?
That's right.
Sorry for the confusion, of course I meant the standard 1500 MTU.
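(The arithmetic: 1500-byte MTU + 14-byte Ethernet header = 1514 bytes, which
already exceeds one 1K chunk, so with 1K HW granularity each frame needs two
chunks, i.e. 2K.)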
>
>> This way we reuse each allocated page for at least two packets (and putting
>> skb_shared into the remaining 512b).
>
> I don't think I follow that. I thought that the 2k needs to be exclusive
> for HW, and now you're saying that for the remaining 512 bytes you can do
> whatever you want.
As soon as we've got the packet we know its length. If it's less than 2K minus
skb_shared_info, we put that half page directly into the skb and place
shared_info at the tail. This is what the fast path is doing now.
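Roughly this (a sketch for illustration only, not the actual atlantic code;
AQ_RX_BUF_SIZE and the page/offset bookkeeping are assumed):

    #include <linux/mm.h>
    #include <linux/skbuff.h>

    #define AQ_RX_BUF_SIZE 2048    /* half of a 4K page, assumed */

    static struct sk_buff *aq_rx_fast_path(struct page *page,
                                           unsigned int offset,
                                           unsigned int len)
    {
            unsigned int tail = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
            struct sk_buff *skb;

            /* Fast path only if the frame leaves room for shared_info
             * in the tail of the same 2K chunk.
             */
            if (len > AQ_RX_BUF_SIZE - tail)
                    return NULL;    /* fall back to the slow path */

            /* The half page becomes the skb linear area; shared_info
             * lands in the remaining tail. Note there is zero headroom:
             * HW wrote the frame at the very start of the buffer, which
             * is exactly why build_skb() is problematic here.
             */
            skb = build_skb(page_address(page) + offset, AQ_RX_BUF_SIZE);
            if (!skb)
                    return NULL;

            __skb_put(skb, len);
            return skb;
    }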
> If that's true then I think you can have build_skb support and I don't see
> that 1k granularity as a limitation.
That's true, but we can't use build_skb, exactly for the reason Ramsay
discovered: we always need extra headroom.
>> I know many customers do consider AQC chips in near-embedded environments
>> (routers, etc). They really do care about memory. So that could be risky.
>
> We have a knob that is controlled by ethtool's priv flag so you can change
> the memory model and pull the build_skb out of the picture. Just FYI.
Priv flags are considered harmful today...
But I agree that in general we lack support for driver fast-path tuning,
like changing the page order for large jumbos, or knobs for the page reuse
logic. Maybe devlink params could be considered for this?
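Something along these lines, perhaps (a hypothetical sketch; the
"rx_page_order" parameter, its id and its callbacks are invented for
illustration, while DEVLINK_PARAM_DRIVER() and devlink_params_register()
are the actual devlink API):

    #include <net/devlink.h>

    enum {
            AQ_DEVLINK_PARAM_ID_RX_PAGE_ORDER = DEVLINK_PARAM_GENERIC_ID_MAX + 1,
    };

    static int aq_rx_page_order_get(struct devlink *devlink, u32 id,
                                    struct devlink_param_gset_ctx *ctx)
    {
            ctx->val.vu8 = 0;       /* would read the current order from driver state */
            return 0;
    }

    static int aq_rx_page_order_set(struct devlink *devlink, u32 id,
                                    struct devlink_param_gset_ctx *ctx)
    {
            /* would re-init the RX rings with the new page order */
            return 0;
    }

    static const struct devlink_param aq_devlink_params[] = {
            DEVLINK_PARAM_DRIVER(AQ_DEVLINK_PARAM_ID_RX_PAGE_ORDER,
                                 "rx_page_order", DEVLINK_PARAM_TYPE_U8,
                                 BIT(DEVLINK_PARAM_CMODE_RUNTIME),
                                 aq_rx_page_order_get, aq_rx_page_order_set,
                                 NULL),
    };

    /* registration, e.g. at probe time:
     *   devlink_params_register(devlink, aq_devlink_params,
     *                           ARRAY_SIZE(aq_devlink_params));
     */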
>>> This issue would pop up again if this driver would like to support XDP
>>> where 256 byte headroom will have to be provided.
>>
>> Actually it already popped up. That's one of the reasons I'm delaying the
>> XDP patch series for this driver.
>>
>> I think the best tradeoff here would be allocating order 1 or 2 pages (i.e.
>> 8K or 16K), and reusing the page for multiple placements of 2K XDP packets:
>>
>> (256+2048)*3 = 6912 (~1.25K overhead per 3 packets in an 8K page)
>>
>> (256+2048)*7 = 16128 (256b overhead over 7 packets in a 16K page)
>
> And for XDP_PASS you would use build_skb? Then tailroom needs to be
> provided.
For an efficient PASS I think both tailroom and headroom should somehow be
reserved. Then yes, build_skb could be used...
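Roughly the pattern other drivers use (a sketch under assumptions --
AQ_XDP_BUF_SIZE and the surrounding plumbing are made up; XDP_PACKET_HEADROOM,
struct xdp_buff and build_skb() are the real kernel pieces):

    #include <linux/bpf.h>
    #include <linux/skbuff.h>
    #include <net/xdp.h>

    /* Reserve XDP headroom in front of the frame and leave room for
     * skb_shared_info at the tail, so XDP_PASS can use build_skb().
     */
    #define AQ_XDP_BUF_SIZE (XDP_PACKET_HEADROOM + 2048 + \
                             SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))

    static struct sk_buff *aq_xdp_pass(void *hard_start, unsigned int len)
    {
            struct xdp_buff xdp;
            struct sk_buff *skb;

            xdp.data_hard_start = hard_start;
            xdp.data = hard_start + XDP_PACKET_HEADROOM;    /* HW wrote here */
            xdp.data_end = xdp.data + len;
            xdp.data_meta = xdp.data;

            /* ... run the BPF program; on XDP_PASS: ... */

            skb = build_skb(xdp.data_hard_start, AQ_XDP_BUF_SIZE);
            if (!skb)
                    return NULL;

            /* Headroom is whatever the program left in front of data. */
            skb_reserve(skb, xdp.data - xdp.data_hard_start);
            __skb_put(skb, xdp.data_end - xdp.data);
            return skb;
    }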
Thanks
Igor