Date: Wed, 8 May 2024 16:44:32 +0100
From: Pavel Begunkov <asml.silence@...il.com>
To: Jason Gunthorpe <jgg@...pe.ca>
Cc: Mina Almasry <almasrymina@...gle.com>,
 Christoph Hellwig <hch@...radead.org>, netdev@...r.kernel.org,
 linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
 linux-alpha@...r.kernel.org, linux-mips@...r.kernel.org,
 linux-parisc@...r.kernel.org, sparclinux@...r.kernel.org,
 linux-trace-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
 bpf@...r.kernel.org, linux-kselftest@...r.kernel.org,
 linux-media@...r.kernel.org, dri-devel@...ts.freedesktop.org,
 "David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
 Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
 Jonathan Corbet <corbet@....net>,
 Richard Henderson <richard.henderson@...aro.org>,
 Ivan Kokshaysky <ink@...assic.park.msu.ru>, Matt Turner
 <mattst88@...il.com>, Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
 "James E.J. Bottomley" <James.Bottomley@...senpartnership.com>,
 Helge Deller <deller@....de>, Andreas Larsson <andreas@...sler.com>,
 Jesper Dangaard Brouer <hawk@...nel.org>,
 Ilias Apalodimas <ilias.apalodimas@...aro.org>,
 Steven Rostedt <rostedt@...dmis.org>, Masami Hiramatsu
 <mhiramat@...nel.org>, Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
 Arnd Bergmann <arnd@...db.de>, Alexei Starovoitov <ast@...nel.org>,
 Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>,
 Martin KaFai Lau <martin.lau@...ux.dev>, Eduard Zingerman
 <eddyz87@...il.com>, Song Liu <song@...nel.org>,
 Yonghong Song <yonghong.song@...ux.dev>,
 John Fastabend <john.fastabend@...il.com>, KP Singh <kpsingh@...nel.org>,
 Stanislav Fomichev <sdf@...gle.com>, Hao Luo <haoluo@...gle.com>,
 Jiri Olsa <jolsa@...nel.org>, Steffen Klassert
 <steffen.klassert@...unet.com>, Herbert Xu <herbert@...dor.apana.org.au>,
 David Ahern <dsahern@...nel.org>,
 Willem de Bruijn <willemdebruijn.kernel@...il.com>,
 Shuah Khan <shuah@...nel.org>, Sumit Semwal <sumit.semwal@...aro.org>,
 Christian König <christian.koenig@....com>,
 Amritha Nambiar <amritha.nambiar@...el.com>,
 Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
 Alexander Mikhalitsyn <alexander@...alicyn.com>,
 Kaiyuan Zhang <kaiyuanz@...gle.com>, Christian Brauner <brauner@...nel.org>,
 Simon Horman <horms@...nel.org>, David Howells <dhowells@...hat.com>,
 Florian Westphal <fw@...len.de>, Yunsheng Lin <linyunsheng@...wei.com>,
 Kuniyuki Iwashima <kuniyu@...zon.com>, Jens Axboe <axboe@...nel.dk>,
 Arseniy Krasnov <avkrasnov@...utedevices.com>,
 Aleksander Lobakin <aleksander.lobakin@...el.com>,
 Michael Lass <bevan@...co.net>, Jiri Pirko <jiri@...nulli.us>,
 Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
 Lorenzo Bianconi <lorenzo@...nel.org>,
 Richard Gobert <richardbgobert@...il.com>,
 Sridhar Samudrala <sridhar.samudrala@...el.com>,
 Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
 Johannes Berg <johannes.berg@...el.com>, Abel Wu <wuyun.abel@...edance.com>,
 Breno Leitao <leitao@...ian.org>, David Wei <dw@...idwei.uk>,
 Shailend Chand <shailend@...gle.com>,
 Harshitha Ramamurthy <hramamurthy@...gle.com>,
 Shakeel Butt <shakeel.butt@...ux.dev>, Jeroen de Borst
 <jeroendb@...gle.com>, Praveen Kaligineedi <pkaligineedi@...gle.com>
Subject: Re: [RFC PATCH net-next v8 02/14] net: page_pool: create hooks for
 custom page providers

On 5/8/24 15:25, Jason Gunthorpe wrote:
> On Wed, May 08, 2024 at 12:30:07PM +0100, Pavel Begunkov wrote:
> 
>>> I'm not going to pretend to know about page pool details, but dmabuf
>>> is the way to get the bulk of pages into a pool within the net stack's
>>> allocator and keep that bulk properly refcounted while.
>>>
>>> An object like
>>> dmabuf is needed for the general case because there are
>>> not going to be per-page references or otherwise available.
>>
>> They are already pinned, memory is owned by the provider, io_uring
>> in this case, and it should not be freed circumventing io_uring,
>> and at this stage calling release_pages() is not such a hassle,
>> especially comparing to introducing an additional object.
> 
> Something needs to co-ordinate when the net stack's allocator is done
> with the bulk of pages and when io_uring can do the final
> put_user_page() to free it. DMABUF is not an unreasonable choice for
> this.

When a page pool dies, it notifies io_uring via the ->destroy
callback. Vice versa, when io_uring wants to terminate zerocopy,
it releases the interface queue, which kills the page pool, again
ending in ->destroy.
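
To make the two directions concrete, here is a rough, self-contained C
model; the names (toy_page_pool, toy_provider_ops, toy_iou_zc_destroy)
are invented for illustration and are not the actual hooks from this
series:

/* Rough userspace model of the teardown hand-off above; all names are
 * made up for illustration, not taken from the patches. */
#include <stdbool.h>
#include <stdio.h>

struct toy_page_pool;

struct toy_provider_ops {
	/* called exactly once when the pool is torn down */
	void (*destroy)(struct toy_page_pool *pool);
};

struct toy_page_pool {
	const struct toy_provider_ops *mp_ops;
	void *mp_priv;		/* e.g. the io_uring zerocopy context */
	bool dead;
};

/* Direction 1: the pool dies (e.g. the driver reconfigures the queue)
 * and notifies the provider through ->destroy. */
static void toy_page_pool_destroy(struct toy_page_pool *pool)
{
	pool->dead = true;
	if (pool->mp_ops && pool->mp_ops->destroy)
		pool->mp_ops->destroy(pool);
}

/* Direction 2: io_uring terminates zerocopy by releasing the interface
 * queue, which kills the pool, which again lands in ->destroy. */
static void toy_release_ifq(struct toy_page_pool *pool)
{
	toy_page_pool_destroy(pool);
}

static void toy_iou_zc_destroy(struct toy_page_pool *pool)
{
	printf("io_uring notified: pool %p is gone\n", (void *)pool);
}

int main(void)
{
	const struct toy_provider_ops ops = { .destroy = toy_iou_zc_destroy };
	struct toy_page_pool pool = { .mp_ops = &ops };

	toy_release_ifq(&pool);		/* user-driven teardown path */
	return 0;
}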
  
>>> topic to me, and honestly hacking into the allocator free function
>>> seems a bit weird..
>>
>> Do you also think that DMA_BUF_IOCTL_SYNC is a weird hack, because
>> it "delays free" by pinning the dmabuf object and letting the user
>> read memory instead of copying it? I can find many examples
> 
> It seems to me the flow you want is for the driver to allocate a page,
> put it on a rx ring, process it through the netstack, and deliver it
> to io_uring. io_uring would then sit on the allocation until userspace
> is done and return it back to the netstack allocator.

That's right, with the note that "driver allocating a page" is
not an alloc_page() but grabbing a user page/frag that the user
registered beforehand.
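
Purely as a toy illustration (all names invented here): the provider's
"allocation" just hands out chunks of an area the user registered up
front and recycles them when they come back, rather than ever calling
alloc_page():

/* Minimal model of allocating from a pre-registered area instead of
 * alloc_page(). The static array stands in for pinned user pages. */
#include <stdio.h>

#define NCHUNKS		4
#define CHUNK_SZ	4096

static char registered_area[NCHUNKS][CHUNK_SZ];
static int free_stack[NCHUNKS];
static int free_top;

static void provider_init(void)
{
	for (int i = 0; i < NCHUNKS; i++)
		free_stack[free_top++] = i;
}

/* What the pool's alloc path would call instead of alloc_page(). */
static void *provider_alloc(void)
{
	if (!free_top)
		return NULL;	/* nothing returned by the user yet */
	return registered_area[free_stack[--free_top]];
}

/* A buffer coming back from userspace goes onto the free list. */
static void provider_free(void *buf)
{
	int idx = (int)(((char *)buf - &registered_area[0][0]) / CHUNK_SZ);

	free_stack[free_top++] = idx;
}

int main(void)
{
	provider_init();

	void *b = provider_alloc();
	printf("handed out chunk %p\n", b);
	provider_free(b);
	return 0;
}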

> Hooking the free of the netstack allocator and then deferring it seems

FWIW, it's not about the page pool's ->release_page; that's a slow
path and in an ideal world wouldn't be called outside of teardown.

> like a weird and indirect way to get there. Why can't io_uring just be
> the entity that does the final free and not mess with the allocator
> logic?

Then the user has to do a syscall (e.g. via io_uring) to return pages,
and there we'd need to care about putting the pages back efficiently,
i.e. hitting the page pool's fast path, e.g. by hoping NAPI is
scheduled, and scheduled on the CPU we're running on, or maybe
transferring the pages to the right CPU first.

Compare that with userspace putting pages into a ring and the
allocator taking them from there when needed, without any extra
synchronisation or hassle, because it's the sole consumer; a rough
model is sketched below.
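
Roughly like this self-contained single-producer/single-consumer ring
(a conceptual sketch, not the actual io_uring ring layout or ABI):
userspace publishes returned buffer ids, the pool's alloc path pops
them, and no syscall or locking is needed on the return path because
there is exactly one producer and one consumer.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SZ 256u		/* power of two */

struct refill_ring {
	_Atomic uint32_t head;	/* written by the consumer (kernel side) */
	_Atomic uint32_t tail;	/* written by the producer (userspace) */
	uint32_t buf_id[RING_SZ];
};

/* Userspace: hand a buffer back by publishing its id. */
static int ring_produce(struct refill_ring *r, uint32_t id)
{
	uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
	uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);

	if (tail - head == RING_SZ)
		return -1;			/* ring full */
	r->buf_id[tail & (RING_SZ - 1)] = id;
	atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
	return 0;
}

/* Sole consumer (the pool's alloc path): pop a returned buffer, if any. */
static int ring_consume(struct refill_ring *r, uint32_t *id)
{
	uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
	uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

	if (head == tail)
		return -1;			/* nothing returned yet */
	*id = r->buf_id[head & (RING_SZ - 1)];
	atomic_store_explicit(&r->head, head + 1, memory_order_release);
	return 0;
}

int main(void)
{
	static struct refill_ring ring;
	uint32_t id;

	ring_produce(&ring, 42);	/* userspace returns buffer 42 */
	if (!ring_consume(&ring, &id))	/* allocator picks it up */
		printf("refilled with buffer %u\n", id);
	return 0;
}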

-- 
Pavel Begunkov
