Message-ID: <CAADnVQLRrhyOHGPb1O0Ju=7YVCNexdhwtoJaGYrfU9Vh2cBbgw@mail.gmail.com>
Date: Thu, 13 Feb 2025 09:45:26 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Changwoo Min <changwoo@...lia.com>
Cc: Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>, Martin KaFai Lau <martin.lau@...ux.dev>, Eddy Z <eddyz87@...il.com>,
Song Liu <song@...nel.org>, Yonghong Song <yonghong.song@...ux.dev>,
John Fastabend <john.fastabend@...il.com>, KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...ichev.me>, Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
Tejun Heo <tj@...nel.org>, Andrea Righi <arighi@...dia.com>, kernel-dev@...lia.com,
bpf <bpf@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH bpf-next] bpf: Add a retry after refilling the free list
when unit_alloc() fails
On Wed, Feb 12, 2025 at 12:49 AM Changwoo Min <changwoo@...lia.com> wrote:
>
> (e.g., bpf_cpumask_create), allocate the additional free entry in an atomic
> manner (atomic = true in alloc_bulk).
...
> +	if (unlikely(!llnode && !retry)) {
> +		int cpu = smp_processor_id();
> +		alloc_bulk(c, 1, cpu_to_node(cpu), true);
This is broken.
Passing atomic doesn't help.
unit_alloc() can be called from any context
including NMI/IRQ/kprobe deeply nested in slab internals.
kmalloc() is not safe from there.
The whole point of bpf_mem_alloc() is to be safe from
unknown context. If we could do kmalloc(GFP_NOWAIT)
everywhere, bpf_mem_alloc() wouldn't be needed.
But we may do something.
Draining free_by_rcu_ttrace and waiting_for_gp_ttrace can be done,
but will it address your case?
The commit log is too terse to understand what exactly is going on.
Pls share the call stack. What is the allocation size?
How many do you do in a sequence?
Why are irqs disabled? Isn't this for scx?
pw-bot: cr