Message-ID: <20191114222753.03e50613@carbon>
Date: Thu, 14 Nov 2019 22:27:53 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Jonathan Lemon <jonathan.lemon@...il.com>
Cc: <netdev@...r.kernel.org>, <davem@...emloft.net>,
<kernel-team@...com>, <ilias.apalodimas@...aro.org>,
brouer@...hat.com
Subject: Re: [net-next PATCH v2 1/2] page_pool: do not release pool until inflight == 0.
On Thu, 14 Nov 2019 08:37:14 -0800
Jonathan Lemon <jonathan.lemon@...il.com> wrote:
> The page pool keeps track of the number of pages in flight, and
> it isn't safe to remove the pool until all pages are returned.
>
> Disallow removing the pool until all pages are back, so the pool
> is always available for page producers.
I like this patch.
> Make the page pool responsible for its own delayed destruction
> instead of relying on XDP, so the page pool can be used without
> the xdp memory model.
>
> When all pages are returned, free the pool and notify xdp if the
> pool is registered with the xdp memory system. Have the callback
> perform a table walk since some drivers (cpsw) may share the pool
> among multiple xdp_rxq_info.
>
> Note that the increment of pages_state_release_cnt may result in
> inflight == 0, releasing the pool.
Maybe we can just do the atomic_inc_return trick, and then this patch
can go in by itself.
An alternative is to release the pool via RCU, like the xa
structure (which has a pointer to the pool) is freed via RCU.
> Fixes: d956a048cd3f ("xdp: force mem allocator removal and periodic warning")
> Signed-off-by: Jonathan Lemon <jonathan.lemon@...il.com>
> ---
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer