Message-ID: <5489FB30-8B09-4F74-9C2B-FF25F4654A3F@u.nus.edu>
Date: Mon, 25 Nov 2024 05:24:05 +0000
From: Ruan Bonan <bonan.ruan@...us.edu>
To: Steven Rostedt <rostedt@...dmis.org>, Alexei Starovoitov
<alexei.starovoitov@...il.com>, Peter Zijlstra <peterz@...radead.org>
CC: Peter Zijlstra <peterz@...radead.org>, "mingo@...hat.com"
<mingo@...hat.com>, "will@...nel.org" <will@...nel.org>, "longman@...hat.com"
<longman@...hat.com>, "boqun.feng@...il.com" <boqun.feng@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kpsingh@...nel.org" <kpsingh@...nel.org>, "mattbobrowski@...gle.com"
<mattbobrowski@...gle.com>, "ast@...nel.org" <ast@...nel.org>,
"daniel@...earbox.net" <daniel@...earbox.net>, "andrii@...nel.org"
<andrii@...nel.org>, "martin.lau@...ux.dev" <martin.lau@...ux.dev>,
"eddyz87@...il.com" <eddyz87@...il.com>, "song@...nel.org" <song@...nel.org>,
"yonghong.song@...ux.dev" <yonghong.song@...ux.dev>,
"john.fastabend@...il.com" <john.fastabend@...il.com>, "sdf@...ichev.me"
<sdf@...ichev.me>, "haoluo@...gle.com" <haoluo@...gle.com>,
"jolsa@...nel.org" <jolsa@...nel.org>, "mhiramat@...nel.org"
<mhiramat@...nel.org>, "mathieu.desnoyers@...icios.com"
<mathieu.desnoyers@...icios.com>, "bpf@...r.kernel.org"
<bpf@...r.kernel.org>, "linux-trace-kernel@...r.kernel.org"
<linux-trace-kernel@...r.kernel.org>, "netdev@...r.kernel.org"
<netdev@...r.kernel.org>, Fu Yeqi <e1374359@...us.edu>
Subject: Re: [BUG] possible deadlock in __schedule (with reproducer available)
Hi Alexei, Steven, and Peter,
Thank you for the detailed feedback; I really appreciate it. I understand your point about the responsibilities that come with attaching code to tracepoints and the complexities of those contexts. My intent was to highlight a reproducible scenario in which this deadlock can occur, rather than to assign blame to the scheduler code itself. I also found similar cases reported previously, such as https://lore.kernel.org/bpf/611d0b3b-18bd-8564-4c8d-de7522ada0ba@fb.com/T/.
Regarding the bug report, I tried to follow the reporting procedure at https://www.kernel.org/doc/html/v4.19/admin-guide/reporting-bugs.html. However, in this case it was not clear to me, from the local call trace alone, which single subsystem the report should go to. I apologize for the noise, and in future bug reports I will try to identify and involve only the directly related subsystem.
From the discussion, it appears that the root cause might involve specific printk or BPF operations in the given context. To clarify and possibly avoid similar issues in the future, are there guidelines or best practices for writing BPF programs/hooks that interact with tracepoints, especially those related to scheduler events, to prevent such deadlocks?
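For concreteness, below is a rough sketch of the direction we are considering on our side. The tracepoint, map layout, and names here are our own assumption for illustration, not taken from the reproducer in this thread: instead of calling bpf_printk() from a program attached to a scheduler tracepoint, the program only queues the event into a BPF ring buffer, and user space drains and prints it.

// SPDX-License-Identifier: GPL-2.0
/* Rough sketch only -- illustrative names, not the actual reproducer.
 * Idea: from a program attached to a scheduler tracepoint, avoid
 * bpf_printk() (which goes through the printk/console path and may try
 * to take locks the scheduler already holds) and instead push the event
 * into a ring buffer that user space reads and prints.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct event {
	u32 pid;
	char comm[16];
};

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 256 * 1024);
} events SEC(".maps");

SEC("tp_btf/sched_wakeup")
int BPF_PROG(on_sched_wakeup, struct task_struct *p)
{
	struct event *e;

	/* No bpf_printk() in this context; just reserve, fill, submit. */
	e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
	if (!e)
		return 0;

	e->pid = p->pid;
	bpf_probe_read_kernel_str(e->comm, sizeof(e->comm), p->comm);
	bpf_ringbuf_submit(e, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

Does this direction make sense, or are there other constraints in that context we should be aware of?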
P.S. I found a prior discussion here: https://lore.kernel.org/bpf/CANpmjNPrHv56Wvc_NbwhoGEU1ZnOepWXT2AmDVVjuY=R8n2XQA@mail.gmail.com/T/. However, there are no more updates.
Thanks,
Bonan
On 2024/11/25, 11:45, "Steven Rostedt" <rostedt@...dmis.org> wrote:
On Sun, 24 Nov 2024 22:30:45 -0500
Steven Rostedt <rostedt@...dmis.org> wrote:
> > > Ack. BPF should not be causing deadlocks by doing code called from
> > > tracepoints.
> >
> > I sense so much BPF love here that it diminishes the ability to read
> > stack traces :)
>
> You know I love BPF ;-) I do recommend it when I feel it's the right
> tool for the job.
BTW, I want to apologize if my email sounded like an attack on BPF.
That wasn't my intention. It was more that Peter's response was so
short that the submitter might not understand it. It's not up to Peter
to explain himself; as I said, this isn't his problem.
I figured I would fill in the gap. My fear is that, as more people use
BPF, when a bug happens after they attach a BPF program somewhere, they
will blame the code they attached to. If this had been titled "Possible
deadlock when attaching BPF program to scheduler" and been sent to the
BPF folks, I would not have had any issue with it. But it was sent to
the scheduler maintainers.
We need to teach people that if a bug happens after they attach a BPF
program somewhere, they should first notify the BPF folks. Then, if it
really does turn out to be a bug in the subsystem the program was
attached to, it should be the BPF folks who inform that subsystem's
maintainers, not the original submitter.
Cheers,
-- Steve