Message-ID: <aFyMSoxZ-Iz33fL9@kbusch-mbp>
Date: Wed, 25 Jun 2025 17:54:50 -0600
From: Keith Busch <kbusch@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Jens Axboe <axboe@...nel.dk>, Jiazi Li <jqqlijiazi@...il.com>,
linux-kernel@...r.kernel.org,
"peixuan.qiu" <peixuan.qiu@...nssion.com>, io-uring@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH] stacktrace: do not trace user stack for user_worker tasks
On Wed, Jun 25, 2025 at 06:41:44PM -0400, Steven Rostedt wrote:
> On Wed, 25 Jun 2025 16:30:55 -0600
> Jens Axboe <axboe@...nel.dk> wrote:
>
> > On 6/25/25 2:50 PM, Steven Rostedt wrote:
> > > [
> > > Adding Peter Zijlstra as he has been telling me to test against
> > > PF_KTHREAD instead of current->mm to tell if it is a kernel thread.
> > > But that seems to not be enough!
> > > ]
> >
> > Not sure I follow - if current->mm is NULL, then it's PF_KTHREAD too.
> > Unless it has used kthread_use_mm().
> >
> > A PF_USER_WORKER task will have the current->mm of the user task it
> > was cloned from.
>
> The suggestion was to use (current->flags & PF_KTHREAD) instead of
> !current->mm to determine whether a task is a kernel thread, as we don't
> want to do user space stack tracing on kernel threads. Peter said that
> because io threads have current->mm set, you can't rely on that, so
> check the PF_KTHREAD flag instead. That assumed io threads have
> PF_KTHREAD set too, but apparently they do not, so we need to check for
> PF_USER_WORKER in addition to PF_KTHREAD.
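For what it's worth, the check being described would look something
like the sketch below. This is illustrative only, not the actual patch;
the helper name is made up, but PF_KTHREAD and PF_USER_WORKER are the
real task flags from linux/sched.h:

	/*
	 * Sketch: a task only has a user stack worth walking if it is
	 * neither a true kernel thread (PF_KTHREAD) nor an io_uring/vhost
	 * worker (PF_USER_WORKER). The workers were cloned from a user
	 * task and so have a non-NULL ->mm, which is why !current->mm
	 * alone is not a reliable test.
	 */
	static bool task_has_user_stack(struct task_struct *task)
	{
		if (task->flags & (PF_KTHREAD | PF_USER_WORKER))
			return false;
		return task->mm != NULL;
	}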
If you're interested, here's a discussion with Linus on some
PF_USER_WORKER fallout I stumbled on a few months ago, with no clear
long-term resolution:
https://lore.kernel.org/kvm/CAHk-=wg4Wm4x9GoUk6M8BhLsrhLj4+n8jA2Kg8XUQF=kxgNL9g@mail.gmail.com/
That was about userspace problems with these PF_USER_WORKER tasks
spawned by vhost rather than anything in the kernel, so it's from the
other side of what you're dealing with here. I'm just mentioning it in
case the improvements you're considering could be useful for the
userspace side too.