Message-ID: <d8c01f69-34c0-45cf-a532-83544a3a3efd@redhat.com>
Date: Wed, 19 Feb 2025 20:36:13 -0500
From: Waiman Long <llong@...hat.com>
To: "Masami Hiramatsu (Google)" <mhiramat@...nel.org>,
Waiman Long <llong@...hat.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Will Deacon <will@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>,
Boqun Feng <boqun.feng@...il.com>, Joel Granados <joel.granados@...nel.org>,
Anna Schumaker <anna.schumaker@...cle.com>, Lance Yang
<ioworker0@...il.com>, Kent Overstreet <kent.overstreet@...ux.dev>,
Yongliang Gao <leonylgao@...cent.com>, Tomasz Figa <tfiga@...omium.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>, linux-kernel@...r.kernel.org,
Linux Memory Management List <linux-mm@...ck.org>
Subject: Re: [PATCH 1/2] hung_task: Show the blocker task if the task is hung
on mutex
On 2/19/25 5:56 PM, Masami Hiramatsu (Google) wrote:
> On Wed, 19 Feb 2025 17:44:11 -0500
> Waiman Long <llong@...hat.com> wrote:
>
>> On 2/19/25 3:24 PM, Steven Rostedt wrote:
>>> On Wed, 19 Feb 2025 15:18:57 -0500
>>> Waiman Long <llong@...hat.com> wrote:
>>>
>>>> It is tricky to access the mutex_waiter structure, which is allocated
>>>> on the stack. So another way to work around this issue is to add a new
>>>> blocked_on_mutex field in task_struct to directly point to the relevant
>>>> mutex. Yes, that increases the size of task_struct by 8 bytes, but it is
>>>> a pretty large structure anyway. Using READ_ONCE()/WRITE_ONCE() to access
>>> And it's been on my TODO list for some time to try to make that structure
>>> smaller again :-/
> I agree to add the field, actually it was my first prototype :)
>
>>>> this field, we don't need to take a lock, though taking the wait_lock may
>>>> still be needed to examine other information inside the mutex.
> Do we need to take it just for accessing owner, which is in an atomic?
Right. I forgot it is an atomic_long_t. In that case, no lock should be
needed.
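To make the idea concrete, here is a minimal userspace sketch (not the kernel patch itself) of the scheme discussed above: task_struct gains a blocked_on_mutex pointer that the waiter sets before sleeping and clears after acquiring the mutex, and a hung-task-style checker reads that pointer plus the mutex's atomic owner word locklessly. The names (my_mutex, my_task, blocker_of) are illustrative, not from the kernel source, and C11 atomics stand in for the kernel's READ_ONCE()/WRITE_ONCE() and atomic_long_t:

```c
#include <stdatomic.h>
#include <stddef.h>

struct my_task;

struct my_mutex {
	/* The kernel mutex keeps the owner (plus flag bits) in an
	 * atomic_long_t, so a snapshot can be read without wait_lock. */
	_Atomic(struct my_task *) owner;
};

struct my_task {
	const char *comm;
	/* Set before blocking, cleared after acquiring.  A single
	 * relaxed load gives readers a consistent snapshot, so no
	 * lock is needed just to find the blocking mutex. */
	_Atomic(struct my_mutex *) blocked_on_mutex;
};

/* WRITE_ONCE() equivalent: record the mutex this task blocks on. */
static void my_task_block_on(struct my_task *t, struct my_mutex *m)
{
	atomic_store_explicit(&t->blocked_on_mutex, m, memory_order_relaxed);
}

static void my_task_unblock(struct my_task *t)
{
	atomic_store_explicit(&t->blocked_on_mutex, NULL, memory_order_relaxed);
}

/* Lockless query a hung-task checker could use: return the owner of
 * the mutex the task is blocked on, or NULL if it is not blocked. */
static struct my_task *blocker_of(struct my_task *t)
{
	struct my_mutex *m =
		atomic_load_explicit(&t->blocked_on_mutex, memory_order_relaxed);
	if (!m)
		return NULL;
	/* Reading the atomic owner word needs no wait_lock either. */
	return atomic_load_explicit(&m->owner, memory_order_relaxed);
}
```

The point of the sketch is that both loads in blocker_of() are plain atomic reads, matching the observation above that neither the new field nor the owner word requires taking wait_lock for a best-effort report.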
Cheers,
Longman