Message-ID: <CAHS8izPFp_Q1OngcwZDQeo=YD+nnA9vyVqhuaT--+uREEkfujQ@mail.gmail.com>
Date: Wed, 9 Oct 2024 14:00:35 -0700
From: Mina Almasry <almasrymina@...gle.com>
To: David Wei <dw@...idwei.uk>
Cc: io-uring@...r.kernel.org, netdev@...r.kernel.org, 
	Jens Axboe <axboe@...nel.dk>, Pavel Begunkov <asml.silence@...il.com>, 
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, 
	"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, 
	Jesper Dangaard Brouer <hawk@...nel.org>, David Ahern <dsahern@...nel.org>
Subject: Re: [PATCH v1 06/15] net: page_pool: add ->scrub mem provider callback

On Mon, Oct 7, 2024 at 3:16 PM David Wei <dw@...idwei.uk> wrote:
>
> From: Pavel Begunkov <asml.silence@...il.com>
>
> page pool is now waiting for all ppiovs to return before destroying
> itself, and for that to happen the memory provider might need to push
> some buffers, flush caches and so on.
>
> todo: we'll try to get by without it before the final release
>

Is the intention to drop this todo and stick with this patch, or to
try to get by without this callback as the todo says?
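
For reference, this is roughly the shape I have in mind when I read
the patch -- the struct, field, and function names below
(memory_provider_ops, mp_ops, page_pool_scrub_provider) are my guesses
for illustration, not copied from your series:

/* Hypothetical sketch of a provider ops table with the new callback.
 * ->scrub() would be invoked from page_pool teardown so the provider
 * can push out any buffers it still caches before the pool starts
 * waiting for inflight ppiovs to come back.
 */
struct memory_provider_ops {
	int	(*init)(struct page_pool *pool);
	void	(*destroy)(struct page_pool *pool);
	/* new in this patch: flush provider-side caches on teardown */
	void	(*scrub)(struct page_pool *pool);
};

static void page_pool_scrub_provider(struct page_pool *pool)
{
	/* assumes the pool carries a pointer to its provider ops */
	if (pool->mp_ops && pool->mp_ops->scrub)
		pool->mp_ops->scrub(pool);
}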

To be honest, I think I read in a follow-up patch that you want to
unref all the memory on page_pool_destroy, which is not how the
page_pool is used today. Today page_pool_destroy does not reclaim
memory. Changing that may be OK.
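
To illustrate what I mean, this is the teardown pattern drivers use
today -- a stripped-down sketch, with my_rxq made up for the example:

#include <net/page_pool/helpers.h>
#include <net/page_pool/types.h>

/* my_rxq is a made-up driver structure for the example. */
struct my_rxq {
	struct page_pool *pool;
};

/* Today the driver tears the pool down while some pages may still be
 * held elsewhere (e.g. sitting in socket receive queues).
 * page_pool_destroy() does not pull those pages back; it disables the
 * pool and defers the final free until each inflight page is returned
 * through the normal page_pool_put_full_page() path.
 */
static void my_rxq_teardown(struct my_rxq *rxq)
{
	page_pool_destroy(rxq->pool);
	rxq->pool = NULL;
}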

But I'm not sure this is a generic change that should be put in the
page_pool providers. I don't envision other providers implementing
this. I think they'll be more interested in using the page_pool the
way it's used today.

I would suggest that instead of making this a page_pool provider
thing, you instead have your io_uring code listen to a new generic
notification that the page_pool is being destroyed, or that an
rx-queue is being destroyed, or something like that, and do the
scrubbing based on that, maybe?
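
Something along these lines is what I'm picturing --
page_pool_register_destroy_cb() does not exist today, it's only here
to show the shape, and the io_zcrx_* names stand in for whatever your
io_uring side calls its cleanup:

/* Hypothetical: rather than a ->scrub provider op, the core could let
 * interested users register for a "this pool is going away" event and
 * do their own cleanup from there.
 */
typedef void (*pp_destroy_cb_t)(struct page_pool *pool, void *ctx);

int page_pool_register_destroy_cb(struct page_pool *pool,
				  pp_destroy_cb_t cb, void *ctx);

/* io_uring side: return cached ppiovs when the pool is torn down */
static void io_zcrx_pool_going_away(struct page_pool *pool, void *ctx)
{
	struct io_zcrx_ifq *ifq = ctx;

	io_zcrx_scrub(ifq);
}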

-- 
Thanks,
Mina
