Message-ID: <9cc35513-5522-9229-469b-7d691c9790e1@huaweicloud.com>
Date: Mon, 26 Jun 2023 11:30:04 +0800
From: Hou Tao <houtao@...weicloud.com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>, daniel@...earbox.net,
andrii@...nel.org, void@...ifault.com, paulmck@...nel.org
Cc: tj@...nel.org, rcu@...r.kernel.org, netdev@...r.kernel.org,
bpf@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH v2 bpf-next 09/13] bpf: Allow reuse from
waiting_for_gp_ttrace list.
Hi,
On 6/24/2023 11:13 AM, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@...nel.org>
>
> alloc_bulk() can reuse elements from free_by_rcu_ttrace.
> Let it reuse from waiting_for_gp_ttrace as well to avoid unnecessary kmalloc().
>
> Signed-off-by: Alexei Starovoitov <ast@...nel.org>
> ---
> kernel/bpf/memalloc.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
> index 692a9a30c1dc..666917c16e87 100644
> --- a/kernel/bpf/memalloc.c
> +++ b/kernel/bpf/memalloc.c
> @@ -203,6 +203,15 @@ static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node)
> if (i >= cnt)
> return;
>
> + for (; i < cnt; i++) {
> + obj = llist_del_first(&c->waiting_for_gp_ttrace);
After allowing reuse of elements from waiting_for_gp_ttrace, there may
be concurrent llist_del_first() and llist_del_all() on the same list, as
shown below. llist_del_first() is not safe in that case, because the
elements freed by free_rcu() could be reused immediately, so head->first
may be added back to c0->waiting_for_gp_ttrace by another process while
llist_del_first() is still in progress.
// c0
alloc_bulk()
    llist_del_first(&c->waiting_for_gp_ttrace)

        // c1->tgt = c0
        free_rcu()
            llist_del_all(&c->waiting_for_gp_ttrace)
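
To make the hazard concrete, below is a minimal userspace sketch that
replays the interleaving above in a single thread. It is purely
illustrative and not the kernel llist code: the struct, head and steps
are simplified stand-ins, and the final "cmpxchg" is modelled by a plain
compare-and-assign that succeeds for the same reason the real one would
(head->first is equal to the stale value again).

#include <stdio.h>
#include <stddef.h>

/* simplified stand-ins for llist_node / llist_head */
struct node { struct node *next; int id; };
struct head { struct node *first; };

int main(void)
{
	struct node a = { .id = 1 }, b = { .id = 2 };
	struct head gp = { 0 };

	/* c0->waiting_for_gp_ttrace: A -> B */
	a.next = &b;
	gp.first = &a;

	/* c0, alloc_bulk(): llist_del_first() has loaded first and
	 * first->next but has not yet done the cmpxchg. */
	struct node *seen_first = gp.first;         /* A */
	struct node *seen_next  = seen_first->next; /* B */

	/* c1->tgt = c0, free_rcu(): llist_del_all() detaches the whole
	 * list, the elements are freed and reused immediately; assume
	 * A's memory ends up back on c0->waiting_for_gp_ttrace while B
	 * now belongs to someone else. */
	gp.first = NULL;
	a.next = NULL;
	gp.first = &a;

	/* c0 resumes: cmpxchg(&first, A, B) succeeds because first == A
	 * again, silently linking the foreign element B back in. */
	if (gp.first == seen_first)
		gp.first = seen_next;

	printf("head now points to node %d, which should no longer be reachable\n",
	       gp.first->id);
	return 0;
}

Running it prints node 2, i.e. an element that was handed to another
owner is reachable from c0's list again, which is the corruption this
interleaving can cause.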
> + if (!obj)
> + break;
> + add_obj_to_free_list(c, obj);
> + }
> + if (i >= cnt)
> + return;
> +
> memcg = get_memcg(c);
> old_memcg = set_active_memcg(memcg);
> for (; i < cnt; i++) {