Message-ID: <CAPhsuW5cWzSzFXDgP-hQr7vRfLE7LN2NsE0n7Q659dosfgbhOw@mail.gmail.com>
Date: Sun, 10 Apr 2022 22:19:52 -0700
From: Song Liu <song@...nel.org>
To: Toke Høiland-Jørgensen <toke@...hat.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Freysteinn Alfredsson <freysteinn.alfredsson@....se>,
Paolo Abeni <pabeni@...hat.com>,
Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>
Subject: Re: [PATCH bpf] bpf: Fix release of page_pool in BPF_PROG_RUN
On Sat, Apr 9, 2022 at 2:31 PM Toke Høiland-Jørgensen <toke@...hat.com> wrote:
>
> The live packet mode in BPF_PROG_RUN allocates a page_pool instance for
> each test run and uses it for the packet data. On setup it creates the
> page_pool and calls xdp_reg_mem_model() to allow pages to be returned
> properly from the XDP data path. However, xdp_reg_mem_model() also raises
> the reference count of the page_pool itself, so the single
> page_pool_destroy() call on teardown was not enough to actually release
> the pool. To fix this, add an additional xdp_unreg_mem_model() call on
> teardown.
>
> Fixes: b530e9e1063e ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN")
> Reported-by: Freysteinn Alfredsson <freysteinn.alfredsson@....se>
> Signed-off-by: Toke Høiland-Jørgensen <toke@...hat.com>
Acked-by: Song Liu <songliubraving@...com>
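
For context, the fix balances the two references taken on the pool: one
by page_pool_create() and a second by xdp_reg_mem_model(). Below is a
minimal sketch of the resulting setup/teardown pattern; the names
(xdp_test_sketch, setup_sketch, teardown_sketch) are hypothetical
stand-ins, not the actual code in net/bpf/test_run.c.

#include <linux/err.h>
#include <linux/numa.h>
#include <linux/string.h>
#include <net/page_pool.h>
#include <net/xdp.h>

/* Hypothetical stand-in for struct xdp_test_data: the xdp_mem_info
 * must live as long as the pool so teardown can unregister it.
 */
struct xdp_test_sketch {
	struct page_pool *pp;
	struct xdp_mem_info mem;
};

static int setup_sketch(struct xdp_test_sketch *x)
{
	struct page_pool_params pp_params = {
		.order = 0,
		.pool_size = 128,	/* arbitrary for the sketch */
		.nid = NUMA_NO_NODE,
	};
	int err;

	/* Reference #1: held by the pool's creator. */
	x->pp = page_pool_create(&pp_params);
	if (IS_ERR(x->pp))
		return PTR_ERR(x->pp);

	/* Reference #2: xdp_reg_mem_model() takes its own ref on the
	 * pool and fills in x->mem.id, the handle needed to drop that
	 * ref again later.
	 */
	memset(&x->mem, 0, sizeof(x->mem));
	err = xdp_reg_mem_model(&x->mem, MEM_TYPE_PAGE_POOL, x->pp);
	if (err) {
		page_pool_destroy(x->pp);
		return err;
	}
	return 0;
}

static void teardown_sketch(struct xdp_test_sketch *x)
{
	xdp_unreg_mem_model(&x->mem);	/* drops reference #2 */
	page_pool_destroy(x->pp);	/* drops reference #1; pool freed */
}

With mem kept in the long-lived struct (rather than on the setup
function's stack, as before the patch), teardown can drop both
references and the pool is actually released.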
> ---
> net/bpf/test_run.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index e7b9c2636d10..af709c182674 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -108,6 +108,7 @@ struct xdp_test_data {
> struct page_pool *pp;
> struct xdp_frame **frames;
> struct sk_buff **skbs;
> + struct xdp_mem_info mem;
> u32 batch_size;
> u32 frame_cnt;
> };
> @@ -147,7 +148,6 @@ static void xdp_test_run_init_page(struct page *page, void *arg)
>
> static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_ctx)
> {
> - struct xdp_mem_info mem = {};
> struct page_pool *pp;
> int err = -ENOMEM;
> struct page_pool_params pp_params = {
> @@ -174,7 +174,7 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c
> }
>
> /* will copy 'mem.id' into pp->xdp_mem_id */
> - err = xdp_reg_mem_model(&mem, MEM_TYPE_PAGE_POOL, pp);
> + err = xdp_reg_mem_model(&xdp->mem, MEM_TYPE_PAGE_POOL, pp);
> if (err)
> goto err_mmodel;
>
> @@ -202,6 +202,7 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c
>
> static void xdp_test_run_teardown(struct xdp_test_data *xdp)
> {
> + xdp_unreg_mem_model(&xdp->mem);
> page_pool_destroy(xdp->pp);
> kfree(xdp->frames);
> kfree(xdp->skbs);
> --
> 2.35.1
>