Message-ID: <a08493a4-632f-d66e-db26-727c9cf9e6c6@huaweicloud.com>
Date: Mon, 17 Feb 2025 10:19:53 +0800
From: Hou Tao <houtao@...weicloud.com>
To: Changwoo Min <changwoo@...lia.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>, Eddy Z <eddyz87@...il.com>,
Song Liu <song@...nel.org>, Yonghong Song <yonghong.song@...ux.dev>,
John Fastabend <john.fastabend@...il.com>, KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...ichev.me>, Hao Luo <haoluo@...gle.com>,
Jiri Olsa <jolsa@...nel.org>, Tejun Heo <tj@...nel.org>,
Andrea Righi <arighi@...dia.com>, kernel-dev@...lia.com,
bpf <bpf@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH bpf-next] bpf: Add a retry after refilling the free list
when unit_alloc() fails
Hi,
On 2/17/2025 12:04 AM, Changwoo Min wrote:
> Hello,
>
> > > What is sizeof(struct bpf_cpumask) in your system?
> >
> > In my system, sizeof(struct bpf_cpumask) is 1032.
> It was a wrong number. sizeof(struct bpf_cpumask) is actually 16.
>
> On 25. 2. 16. 00:16, Changwoo Min wrote:
>> Hello,
>>
>> On 25. 2. 15. 12:51, Alexei Starovoitov wrote:
>> > On Fri, Feb 14, 2025 at 1:24 AM Changwoo Min <changwoo@...lia.com>
>> wrote:
>> >>
>> >> Hello Alexei,
>> >>
>> >> Thank you for the comments! I reordered your comments for ease of
>> >> explanation.
>> >>
>> >> On 25. 2. 14. 02:45, Alexei Starovoitov wrote:
>> >>> On Wed, Feb 12, 2025 at 12:49 AM Changwoo Min
>> <changwoo@...lia.com> wrote:
>> >>
>> >>> The commit log is too terse to understand what exactly is going on.
>> >>> Pls share the call stack. What is the allocation size?
>> >>> How many do you do in a sequence?
>> >>
>> >> The symptom is that an scx scheduler (scx_lavd) fails to load on
>> >> an ARM64 platform on its first try. The second try succeeds. In
>> >> the failure case, the kernel spits the following messages:
>> >>
>> >> [   27.431380] sched_ext: BPF scheduler "lavd" disabled (runtime error)
>> >> [ 27.431396] sched_ext: lavd: ops.init() failed (-12)
>> >> [ 27.431401] scx_ops_enable.isra.0+0x838/0xe48
>> >> [ 27.431413] bpf_scx_reg+0x18/0x30
>> >> [ 27.431418] bpf_struct_ops_link_create+0x144/0x1a0
>> >> [ 27.431427] __sys_bpf+0x1560/0x1f98
>> >> [ 27.431433] __arm64_sys_bpf+0x2c/0x80
>> >> [ 27.431439] do_el0_svc+0x74/0x120
>> >> [ 27.431446] el0_svc+0x80/0xb0
>> >> [ 27.431454] el0t_64_sync_handler+0x120/0x138
>> >> [ 27.431460] el0t_64_sync+0x174/0x178
>> >>
>> >> The ops.init() failed because the 5th bpf_cpumask_create() call
>> >> failed during the initialization of the BPF scheduler. The exact
>> >> point where bpf_cpumask_create() failed is here [1]. That scx
>> >> scheduler allocates 5 CPU masks to aid its scheduling decision.
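For illustration, the retry-after-refill idea in this patch's subject can be modeled in plain C. This is only a userspace sketch; cache_alloc(), cache_refill() and BATCH are made-up names for this model, not the kernel's actual identifiers, and the real refill path (alloc_bulk() from irq_work) can of course fail, which this model ignores:

```c
/* Userspace model of "retry after refilling the free list when
 * unit_alloc() fails". All names here are illustrative only. */
#include <stddef.h>

#define BATCH 64

struct cache {
	int free_cnt;		/* objects currently on the free list */
};

/* Pretend refill: in the kernel this would be alloc_bulk() run from
 * irq_work; here it always succeeds and tops the list up to BATCH. */
static void cache_refill(struct cache *c)
{
	c->free_cnt = BATCH;
}

/* Take one object from the free list; fail when it is empty. */
static int cache_alloc(struct cache *c)
{
	if (c->free_cnt == 0)
		return -1;
	c->free_cnt--;
	return 0;
}

/* Allocation with one retry after a refill, as the patch proposes. */
static int cache_alloc_retry(struct cache *c)
{
	if (cache_alloc(c) == 0)
		return 0;
	cache_refill(c);
	return cache_alloc(c);
}
```

With only 4 objects cached, five back-to-back allocations would fail on the fifth without the retry, mirroring the fifth bpf_cpumask_create() failure described above.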
>> >
>> > ...
>> >
>> >> In this particular scenario, the IRQ is not disabled. I just
>> >
>> > since irq-s are not disabled the unit_alloc() should have done:
>> > if (cnt < c->low_watermark)
>> > irq_work_raise(c);
>> >
>> > and alloc_bulk() should have started executing after the first
>> > calloc_cpumask(&active_cpumask);
>> > to refill it from 3 to 64
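The low-watermark behavior described above can be sketched as a small userspace model: popping an object below the watermark raises an irq_work, and the irq_work refills the list back to full. The names and numbers (LOW_WATERMARK, FULL = 64) follow the discussion here, but this is an illustration of the mechanism, not the actual bpf_mem_alloc code:

```c
/* Simplified model of unit_alloc()'s low-watermark refill. The
 * pending-flag stands in for a raised irq_work; in the kernel the
 * refill is alloc_bulk() executed from irq_work context. */
#include <stddef.h>

#define FULL		64
#define LOW_WATERMARK	32

struct unit_cache {
	int cnt;		/* objects on the per-CPU free list */
	int refill_raised;	/* models a pending irq_work */
};

/* Models irq_work_raise(): just records that a refill is pending. */
static void raise_refill(struct unit_cache *c)
{
	c->refill_raised = 1;
}

/* Models alloc_bulk() running from the irq_work: tops up to FULL. */
static void run_pending_refill(struct unit_cache *c)
{
	if (c->refill_raised) {
		c->cnt = FULL;
		c->refill_raised = 0;
	}
}

/* Models unit_alloc(): pop one object, raise a refill when the
 * count drops below the watermark. */
static void *unit_alloc(struct unit_cache *c)
{
	if (c->cnt == 0)
		return NULL;	/* free list exhausted */
	c->cnt--;
	if (c->cnt < LOW_WATERMARK)
		raise_refill(c);
	return (void *)1;	/* stand-in for a real object */
}
```

In this model, as long as the raised refill actually runs before the list drains, subsequent allocations keep succeeding; the failure under discussion is the case where it apparently did not run in time.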
>>
>> Is there any possibility that irq_work is not scheduled right away on
>> aarch64?
It is an IPI. I think its priority is higher than that of the current process.
>>
>> >
>> > What is sizeof(struct bpf_cpumask) in your system?
>>
>> In my system, sizeof(struct bpf_cpumask) is 1032.
>It was a wrong number. sizeof(struct bpf_cpumask) is actually 16.
It is indeed strange. My earlier guess was that bpf_cpumask might be
greater than 4KB, so the refill in the irq work could fail due to memory
fragmentation, but the allocation size is tiny.
>>
>> >
>> > Something doesn't add up. irq_work_queue() should be
>> > instant when irq-s are not disabled.
>> > This is not IRQ_WORK_LAZY.
>> > Are you running PREEMPT_RT?
>>
>> No, CONFIG_PREEMPT_RT is not set.
Could you please share the kernel .config file and the kernel version
for the problem? And if you are running the test in QEMU, please also
share the command line used to run QEMU.
>>
>> Regards,
>> Changwoo Min
>>
>>
>
>