Message-ID: <ac16cc82-8d98-6a2c-b0a6-7c186808c72c@huawei.com>
Date: Thu, 16 Sep 2021 17:33:39 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Ilias Apalodimas <ilias.apalodimas@...aro.org>
CC: Jesper Dangaard Brouer <jbrouer@...hat.com>, <brouer@...hat.com>,
Alexander Duyck <alexander.duyck@...il.com>,
<davem@...emloft.net>, <kuba@...nel.org>, <netdev@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <linuxarm@...neuler.org>,
<hawk@...nel.org>, <jonathan.lemon@...il.com>, <alobakin@...me>,
<willemb@...gle.com>, <cong.wang@...edance.com>,
<pabeni@...hat.com>, <haokexin@...il.com>, <nogikh@...gle.com>,
<elver@...gle.com>, <memxor@...il.com>, <edumazet@...gle.com>,
<dsahern@...il.com>
Subject: Re: [Linuxarm] Re: [PATCH net-next v2 3/3] skbuff: keep track of pp
page when __skb_frag_ref() is called
On 2021/9/16 16:44, Ilias Apalodimas wrote:
>>>> appear if we try to pull in your patches on using page pool and recycling
>
> [...]
>
>>>> for Tx where TSO and skb_split are used?
>>
>> To my understanding, the problem might exist even without tx recycling, because
>> an skb from the wire could theoretically be passed down to the tcp stack and
>> retransmitted back to the wire. As I am not able to set up a configuration to
>> verify and test it, and the handling seems tricky, I am targeting the net-next
>> branch instead of the net branch.
>>
>>>>
>>>> I'll be honest, when I came up with the recycling idea for page pool, I
>>>> never intended to support Tx. I agree with Alexander here, if people want
>>>> to use it on Tx and think there's value, we might need to go back to the
>>>> drawing board and see what I've missed. It's still early and there's a
>>>> handful of drivers using it, so it will be less painful now.
>>
>> Yes, we also need to prototype it to see if there is something missing on the
>> drawing board and how much improvement we get from that :)
>>
>>>
>>> I agree, page_pool is NOT designed or intended for TX support.
>>> E.g. it doesn't make sense to allocate a page_pool instance per socket, as
>>> the backing memory structures for page_pool are too large.
>>> As the number of RX-queues is more limited, it was deemed okay to use a
>>> page_pool per RX-queue, which sacrifices some memory to gain speed.
>>
>> As mentioned before, Tx recycling is based on a page_pool instance per socket;
>> it shares the page_pool instance with rx.
>>
>> Anyway, based on feedback from edumazet and dsahern, I am still trying to
>> see if the page pool is meaningful for tx.
>>
>>>
>>>
>>>> The pp_recycle_bit was introduced to make the checking faster, instead of
>>>> getting stuff into cache and check the page signature. If that ends up
>>>> being counterproductive, we could just replace the entire logic with the
>>>> frag count and the page signature, couldn't we? In that case we should be
>>>> very cautious and measure potential regression on the standard path.
>>>
>>> +1
>>
>> I am not sure "pp_recycle_bit was introduced to make the checking faster" is
>> valid. The size of "struct page" is only about 9 words (36/72 bytes), which
>> mostly fits in a single cache line, and both the standard path and the recycle
>> path already touch the "struct page", so the overhead of checking the
>> signature seems minimal.
>>
>> I agree that we need to be cautious and measure potential regression on the
>> standard path.
>
> Well, pp_recycle is on the same cache line as the head_frag bit we
> need to decide on recycling. After that we start checking page signatures
> etc, which means the default release path remains mostly unaffected.
>
> I guess what you are saying here is that 'struct page' is going to be
> accessed eventually by the default network path, so there won't be any
> noticeable performance hit? What about the other use cases we have
Yes.
> for pp_recycle right now? __skb_frag_unref() in skb_shift() or
> skb_try_coalesce() (the latter can probably be removed tbh).
If we decide to go with an accurate indicator of a pp page, we just need
to make sure the network stack uses __skb_frag_unref() and __skb_frag_ref()
to put and get a page frag; the indicator checking then only needs to be
done in __skb_frag_unref() and __skb_frag_ref(), so skb_shift() and
skb_try_coalesce() should be fine too, as sketched below.
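Roughly something like this (an untested sketch that reuses the existing
page_pool_return_skb_page() signature check; the function signature is
simplified, dropping the recycle argument since the page itself carries
the indicator):

static inline void __skb_frag_unref(skb_frag_t *frag)
{
	struct page *page = skb_frag_page(frag);

	/* page_pool_return_skb_page() checks the pp page signature and
	 * recycles the page if it matches; it returns false for normal
	 * pages, so the caller needs no skb->pp_recycle test.
	 */
	if (page_pool_return_skb_page(page))
		return;

	put_page(page);
}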
>
>>
>> Another way is to use bit 0 of the frag->bv_page pointer to indicate whether
>> a frag page is from the page pool.
>
> Instead of the 'struct page' signature? And the pp_recycle bit will
> continue to exist?
The pp_recycle bit might then only exist, or only be used, for the head page of
the skb. Bit 0 of the frag->bv_page pointer can uniquely indicate a pp frag
page. Memcpying the shinfo or doing "*fragto = *fragfrom" automatically passes
the indicator to the new shinfo before __skb_frag_ref() is called, and
__skb_frag_ref() will increment either _refcount or pp_frag_count according to
bit 0 of frag->bv_page, along the lines of the sketch below.
By the way, I have also prototyped the above idea, and it seems to work well too.
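A minimal, untested sketch of the encoding (assuming struct page pointers
are at least 4-byte aligned so bit 0 is never set in a real pointer; the
helper names and the pp_frag_count handling are simplified for
illustration):

#define SKB_FRAG_PP_BIT	0x1UL

/* Mark a frag as backed by a pp page when it is attached. */
static inline void skb_frag_set_pp(skb_frag_t *frag)
{
	frag->bv_page = (struct page *)((unsigned long)frag->bv_page |
					SKB_FRAG_PP_BIT);
}

/* Recover the real page pointer, masking out the indicator bit. */
static inline struct page *skb_frag_page(const skb_frag_t *frag)
{
	return (struct page *)((unsigned long)frag->bv_page &
			       ~SKB_FRAG_PP_BIT);
}

static inline void __skb_frag_ref(skb_frag_t *frag)
{
	struct page *page = skb_frag_page(frag);

	if ((unsigned long)frag->bv_page & SKB_FRAG_PP_BIT)
		atomic_long_inc(&page->pp_frag_count);	/* pp frag page */
	else
		get_page(page);				/* normal page */
}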
> .
> Right now the 'naive' explanation of the recycling decision is something like:
>
> if (pp_recycle) <--- recycling bit is set
> (check page signature) <--- signature matches page pool
> (check fragment refcnt) <--- If frags are enabled and is the last consumer
> recycle
>
> If we can prove the performance is unaffected when we eliminate the first if,
> then obviously we should remove it. I'll try running that test here and see,
> but keep in mind I am only testing on a 1GB interface. Any chance we can get
> measurements on beefier hardware using hns3?
Sure, I will try it.
As this kind of performance overhead is small, do you have any particular
performance testcase in mind?
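For reference, my understanding of the check being discussed, simplified
from current net-next (so that we measure the removal of the same first
test):

static inline bool skb_pp_recycle(struct sk_buff *skb, void *data)
{
	/* The first check: the skb-level pp_recycle bit. */
	if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
		return false;

	/* The page signature (and, with frags, pp_frag_count) checks
	 * happen inside page_pool_return_skb_page().
	 */
	return page_pool_return_skb_page(virt_to_head_page(data));
}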
>
>>
>>>
>>>> But in general, I'd be happier if we only had simple logic in our
>>>> testing for the pages we have to recycle. Debugging and understanding this
>>>> otherwise will end up being a mess.
>>>
>>>
>
> [...]
>
> Regards
> /Ilias
> .
>