Message-ID: <874b2a66-01db-423a-fe0f-17643f3db02a@huawei.com>
Date: Wed, 16 Aug 2023 20:42:42 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Ilias Apalodimas <ilias.apalodimas@...aro.org>
CC: <davem@...emloft.net>, <kuba@...nel.org>, <pabeni@...hat.com>,
	<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>, Lorenzo Bianconi
	<lorenzo@...nel.org>, Alexander Duyck <alexander.duyck@...il.com>, Liang Chen
	<liangchen.linux@...il.com>, Alexander Lobakin
	<aleksander.lobakin@...el.com>, Jesper Dangaard Brouer <hawk@...nel.org>,
	Eric Dumazet <edumazet@...gle.com>, Maryam Tahhan <mtahhan@...hat.com>
Subject: Re: [PATCH net-next v7 2/6] page_pool: unify frag_count handling in
 page_pool_is_last_frag()

On 2023/8/16 19:30, Ilias Apalodimas wrote:
> On Wed, 16 Aug 2023 at 13:04, Yunsheng Lin <linyunsheng@...wei.com> wrote:
>>
>> Currently when page_pool_create() is called with the
>> PP_FLAG_PAGE_FRAG flag, page_pool_alloc_pages() is only
>> allowed to be called under the below constraints:
>> 1. page_pool_fragment_page() needs to be called to set up
>>    page->pp_frag_count immediately.
>> 2. page_pool_defrag_page() often needs to be called to drain
>>    page->pp_frag_count when no more users will be holding
>>    on to that page.
>>
>> Those constraints exist in order to support a page being
>> split into multiple frags.
>>
>> And those constraints have some overhead because of the
>> cache line dirtying/bouncing and atomic updates.
>>
>> Those constraints are unavoidable when we need a page to be
>> split into more than one frag, but there are also cases where
>> we want to avoid them and their overhead, namely when a page
>> can't be split because it can only hold one big frag as
>> requested by the user. The different use cases are:
>> use case 1: allocate page without page splitting.
>> use case 2: allocate page with page splitting.
>> use case 3: allocate page with or without page splitting
>>             depending on the frag size.
>>
>> Currently page pool only provides the page_pool_alloc_pages()
>> and page_pool_alloc_frag() APIs, which enable use cases 1 & 2
>> separately, so we cannot combine them to enable use case 3;
>> that is not possible yet because of the per-page_pool flag
>> PP_FLAG_PAGE_FRAG.
>>
> 
> I really think we are inventing problems to solve here.
> What would be more useful here would be an example with numbers.  Most
> of what you mention is true, but what % of the packets would split a
> page in a way that the remaining part can't be used is unknown.  Do you
> have a use case in the hns3 driver?  Are there any numbers that justify
> the change?

There is no use case for the hns3 driver yet.
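
To illustrate the two constraints described in the commit message: with
PP_FLAG_PAGE_FRAG set, a driver splitting a page by hand has to follow
roughly the below pattern (a minimal sketch; the two-user split and the
release details are illustrative assumptions, not code from any driver):

	struct page *page;

	page = page_pool_alloc_pages(pool, GFP_ATOMIC);
	if (!page)
		return -ENOMEM;

	/* Constraint 1: set up page->pp_frag_count right after
	 * allocation, here for two users sharing the page.
	 */
	page_pool_fragment_page(page, 2);

	/* ... hand the two halves of the page to two users ... */

	/* Constraint 2: each user drains its share of pp_frag_count
	 * when done; whoever drains it to zero releases the page
	 * back to the pool.
	 */
	if (!page_pool_defrag_page(page, 1))
		page_pool_put_defragged_page(pool, page, -1, false);

It is exactly the page_pool_fragment_page() write and the atomic
page_pool_defrag_page() update that cause the cache line
dirtying/bouncing mentioned above.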

As mentioned in the cover letter, there are use cases for veth and
virtio_net to use both frag pages and non-frag pages in order to get the
best performance with the least memory, depending on the head/tail room
needed for xdp_frame/shinfo and on the mtu/packet size. A rough sketch of
that combined pattern follows below.
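
To make use case 3 concrete, something like the below is what such a
driver would want (the helper name alloc_frag_or_page() and the halfway
threshold are illustrative assumptions, not the API added by this
series):

	/* Split the page only when the requested size leaves room
	 * for at least one more frag; otherwise hand out a whole
	 * page and skip the frag-count overhead entirely.
	 */
	static struct page *alloc_frag_or_page(struct page_pool *pool,
					       unsigned int *offset,
					       unsigned int size,
					       gfp_t gfp)
	{
		if (size > PAGE_SIZE / 2) {
			*offset = 0;
			return page_pool_alloc_pages(pool, gfp);
		}

		return page_pool_alloc_frag(pool, offset, size, gfp);
	}

Today the two branches cannot share one page_pool, since
page_pool_alloc_pages() on a PP_FLAG_PAGE_FRAG pool does not set up
pp_frag_count, which is why this series unifies the frag_count handling
in page_pool_is_last_frag() first.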

For the virtio_net case:
I am guessing that allocating a non-frag page has an advantage in some
cases and allocating a frag page has an advantage in others, which is
why there is a module param for switching between those two modes. It
would be good if Liang Chen could help confirm that guess with some
data.

For the veth case:
It would be good if Maryam (CC'd) could help do some testing to justify
the change, as suggested by Jesper.


For the iavf case, Alexander Lobakin mentioned some data in the
discussion below:
https://patchwork.kernel.org/project/netdevbpf/patch/20230612130256.4572-5-linyunsheng@huawei.com/#25384716

> 

