Message-ID: <87pmuu2q06.ffs@tglx>
Date: Tue, 03 Aug 2021 23:10:49 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [patch 58/63] futex: Prevent requeue_pi() lock nesting issue on RT
On Tue, Aug 03 2021 at 12:07, Peter Zijlstra wrote:
> On Fri, Jul 30, 2021 at 03:51:05PM +0200, Thomas Gleixner wrote:
>> @@ -219,6 +221,10 @@ struct futex_q {
>> struct rt_mutex_waiter *rt_waiter;
>> union futex_key *requeue_pi_key;
>> u32 bitset;
>> + atomic_t requeue_state;
>> +#ifdef CONFIG_PREEMPT_RT
>> + struct rcuwait requeue_wait;
>> +#endif
>> } __randomize_layout;
>>
>> static const struct futex_q futex_q_init = {
>
> Do we want to explicitly initialize requeue_state in futex_q_init? I was
> looking where we reset the state machine and eventually figured it out,
> but I'm thinking something more explicit might help avoid this for the
> next time.
Sure.