Message-ID: <20230626154228.GA6798@maniforge>
Date: Mon, 26 Jun 2023 10:42:28 -0500
From: David Vernet <void@...ifault.com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: daniel@...earbox.net, andrii@...nel.org, houtao@...weicloud.com,
paulmck@...nel.org, tj@...nel.org, rcu@...r.kernel.org,
netdev@...r.kernel.org, bpf@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH v2 bpf-next 13/13] bpf: Convert bpf_cpumask to bpf_mem_cache_free_rcu.
On Fri, Jun 23, 2023 at 08:13:33PM -0700, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@...nel.org>
>
> Convert bpf_cpumask to bpf_mem_cache_free_rcu.
>
> Signed-off-by: Alexei Starovoitov <ast@...nel.org>
Acked-by: David Vernet <void@...ifault.com>
LGTM, thanks for cleaning this up. I left one drive-by comment /
observation below, but it's not a blocker for this patch / series.
> ---
> kernel/bpf/cpumask.c | 20 ++++++--------------
> 1 file changed, 6 insertions(+), 14 deletions(-)
>
> diff --git a/kernel/bpf/cpumask.c b/kernel/bpf/cpumask.c
> index 938a60ff4295..6983af8e093c 100644
> --- a/kernel/bpf/cpumask.c
> +++ b/kernel/bpf/cpumask.c
> @@ -9,7 +9,6 @@
> /**
> * struct bpf_cpumask - refcounted BPF cpumask wrapper structure
> * @cpumask: The actual cpumask embedded in the struct.
> - * @rcu: The RCU head used to free the cpumask with RCU safety.
> * @usage: Object reference counter. When the refcount goes to 0, the
> * memory is released back to the BPF allocator, which provides
> * RCU safety.
> @@ -25,7 +24,6 @@
> */
> struct bpf_cpumask {
> cpumask_t cpumask;
> - struct rcu_head rcu;
> refcount_t usage;
> };
>
> @@ -82,16 +80,6 @@ __bpf_kfunc struct bpf_cpumask *bpf_cpumask_acquire(struct bpf_cpumask *cpumask)
> return cpumask;
> }
>
> -static void cpumask_free_cb(struct rcu_head *head)
> -{
> - struct bpf_cpumask *cpumask;
> -
> - cpumask = container_of(head, struct bpf_cpumask, rcu);
> - migrate_disable();
> - bpf_mem_cache_free(&bpf_cpumask_ma, cpumask);
> - migrate_enable();
> -}
> -
> /**
> * bpf_cpumask_release() - Release a previously acquired BPF cpumask.
> * @cpumask: The cpumask being released.
> @@ -102,8 +90,12 @@ static void cpumask_free_cb(struct rcu_head *head)
> */
> __bpf_kfunc void bpf_cpumask_release(struct bpf_cpumask *cpumask)
> {
> - if (refcount_dec_and_test(&cpumask->usage))
> - call_rcu(&cpumask->rcu, cpumask_free_cb);
> + if (!refcount_dec_and_test(&cpumask->usage))
> + return;
> +
> + migrate_disable();
> + bpf_mem_cache_free_rcu(&bpf_cpumask_ma, cpumask);
> + migrate_enable();
The fact that callers have to disable migration like this in order to
safely free the memory feels a bit leaky. Is there any reason we can't
move this into bpf_mem_{cache_}free_rcu()?
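
Roughly what I had in mind, as an untested sketch (the
__bpf_mem_cache_free_rcu() helper name is made up for illustration; the
actual internals of kernel/bpf/memalloc.c may be organized differently):

	void bpf_mem_cache_free_rcu(struct bpf_mem_alloc *ma, void *ptr)
	{
		/* Let the allocator entry point own the migration guard so
		 * that the per-cpu state it touches stays stable, instead of
		 * requiring every caller to remember the pair.
		 */
		migrate_disable();
		__bpf_mem_cache_free_rcu(ma, ptr);
		migrate_enable();
	}

Callers like bpf_cpumask_release() could then shrink to just:

	__bpf_kfunc void bpf_cpumask_release(struct bpf_cpumask *cpumask)
	{
		if (!refcount_dec_and_test(&cpumask->usage))
			return;

		bpf_mem_cache_free_rcu(&bpf_cpumask_ma, cpumask);
	}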