Message-ID: <b8efc2ce-8856-2c9b-2a8c-edf2a819ebe5@huawei.com>
Date: Fri, 18 Aug 2023 16:46:01 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Ilias Apalodimas <ilias.apalodimas@...aro.org>
CC: <davem@...emloft.net>, <kuba@...nel.org>, <pabeni@...hat.com>,
	<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
	Lorenzo Bianconi <lorenzo@...nel.org>,
	Alexander Duyck <alexander.duyck@...il.com>,
	Liang Chen <liangchen.linux@...il.com>,
	Alexander Lobakin <aleksander.lobakin@...el.com>,
	Saeed Mahameed <saeedm@...dia.com>,
	Leon Romanovsky <leon@...nel.org>,
	Eric Dumazet <edumazet@...gle.com>,
	Jesper Dangaard Brouer <hawk@...nel.org>,
	<linux-rdma@...r.kernel.org>
Subject: Re: [PATCH net-next v6 1/6] page_pool: frag API support for 32-bit
 arch with 64-bit DMA

On 2023/8/17 19:43, Ilias Apalodimas wrote:
>>>>>>
>>>>>> In order to simplify the driver's work when using the frag API,
>>>>>> this patch allows page_pool_alloc_frag() to call
>>>>>> page_pool_alloc_pages() to return pages for those arches.
>>>>>
>>>>> Do we have any use cases of people needing this? Those architectures
>>>>> should be long dead, and although we have to support them in the
>>>>> kernel, I don't personally see the advantage of adjusting the API to
>>>>> do that. Right now we have a very clear separation between allocating
>>>>> pages or fragments. Why should we hide a page allocation under a
>>>>> frag allocation? A driver writer can simply allocate pages for those
>>>>> boards. Am I the only one not seeing a clean win here?
>>>>
>>>> It is also part of removing the per-page_pool PP_FLAG_PAGE_FRAG flag
>>>> in this patchset.
>>>
>>> Yes, that happens *because* of this patchset. I am not against the
>>> change. In fact, I'll have a closer look tomorrow. I am just trying
>>> to figure out if we really need it. When the recycling patches were
>>> introduced into page pool, we had a very specific reason: due to the
>>> XDP verifier we *had* to allocate a packet per page. That was
>>
>> Did you mean an xdp frame containing a frag page cannot be passed to
>> the xdp core?
>> What is the exact reason why the XDP verifier needs a packet per page?
>> Is there a code block that you can point me to?
>
> It's been a while since I looked at this, but doesn't __xdp_return()
> still sync the entire page if the mem type comes from page_pool?

Yes, I checked that too. It is supposed to sync the entire page if the
mem type comes from page_pool, as it depends on the last freed frag to
do the sync_for_device operation.
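[For readers following the thread: the fallback under discussion can be
sketched roughly as below. This is a simplification, not the exact code
of the patch. PAGE_POOL_DMA_USE_PP_FRAG_COUNT is the existing macro that
detects arches where dma_addr_t is wider than unsigned long, so struct
page cannot hold both the DMA address and the frag reference count.]

	/* Sketch: let the frag API transparently hand back whole pages
	 * on 32-bit arches with 64-bit DMA, where struct page has no
	 * room to track frag users alongside the DMA address.
	 */
	struct page *page_pool_alloc_frag(struct page_pool *pool,
					  unsigned int *offset,
					  unsigned int size, gfp_t gfp)
	{
		/* No space for a frag count: fall back to a full page
		 * so drivers need not special-case these arches.
		 */
		if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT) {
			*offset = 0;
			return page_pool_alloc_pages(pool, gfp);
		}

		/* ... normal frag path: carve 'size' bytes out of the
		 * current frag page, tracking users via the page's
		 * frag count ...
		 */
	}

[On the recycle side being discussed: __xdp_return() frees
MEM_TYPE_PAGE_POOL buffers via page_pool_put_full_page(), which passes
dma_sync_size = -1, i.e. "sync the whole page". With frags, that
sync_for_device can only run once the last frag reference is dropped,
which is the behaviour confirmed above.]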