Message-ID: <AM6PR04MB56395EFFAC06E835A4F67DA9F10F9@AM6PR04MB5639.eurprd04.prod.outlook.com>
Date: Wed, 16 Jun 2021 15:42:28 +0000
From: David Mozes <david.mozes@...k.us>
To: Thomas Gleixner <tglx@...utronix.de>,
Matthew Wilcox <willy@...radead.org>
CC: "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Darren Hart <dvhart@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: futex: call to plist_for_each_entry_safe with head=NULL
I will try with the latest 4.19.195 and see.
Thx
David
-----Original Message-----
From: Thomas Gleixner <tglx@...utronix.de>
Sent: Tuesday, June 15, 2021 6:04 PM
To: Matthew Wilcox <willy@...radead.org>; David Mozes <david.mozes@...k.us>
Cc: linux-fsdevel@...r.kernel.org; Ingo Molnar <mingo@...hat.com>; Peter Zijlstra <peterz@...radead.org>; Darren Hart <dvhart@...radead.org>; linux-kernel@...r.kernel.org
Subject: Re: futex: call to plist_for_each_entry_safe with head=NULL
On Sun, Jun 13 2021 at 21:04, Matthew Wilcox wrote:
> On Sun, Jun 13, 2021 at 12:24:52PM +0000, David Mozes wrote:
>> Hi *,
>> Under a very high load of I/O traffic, we got the BUG trace below.
>> We can see that:
>> plist_for_each_entry_safe(this, next, &hb1->chain, list) {
>> if (match_futex (&this->key, &key1))
>>
>> was called with hb1 == NULL in the futex_wake_up function.
>> There is no protection in the code against such a scenario.
>>
>> The NULL could be coming from:
>> hb1 = hash_futex(&key1);
Definitely not.
>>
>> How can we protect against such a situation?
>
> Can you reproduce it without loading proprietary modules?
>
> Your analysis doesn't quite make sense:
>
> hb1 = hash_futex(&key1);
> hb2 = hash_futex(&key2);
>
> retry_private:
> double_lock_hb(hb1, hb2);
>
> If hb1 were NULL, then the oops would come earlier, in double_lock_hb().
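For reference, double_lock_hb() dereferences a hash bucket straight away
to take its spinlock, so a NULL hb1 would fault there, well before the
plist walk. Roughly (paraphrased from 4.19's kernel/futex.c; details may
differ slightly):

        static inline void
        double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
        {
                /* Lock in address order to avoid ABBA deadlocks. */
                if (hb1 <= hb2) {
                        spin_lock(&hb1->lock);  /* a NULL hb1 would oops here */
                        if (hb1 < hb2)
                                spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING);
                } else {
                        spin_lock(&hb2->lock);
                        spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING);
                }
        }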
Sure, but hash_futex() _cannot_ return a NULL pointer ever.
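It only hashes the key and indexes into the futex_queues bucket array.
Roughly (again paraphrased from 4.19's kernel/futex.c):

        static struct futex_hash_bucket *hash_futex(union futex_key *key)
        {
                u32 hash = jhash2((u32 *)&key->both.word,
                                  (sizeof(key->both.word) + sizeof(key->both.ptr)) / 4,
                                  key->both.offset);

                /*
                 * futex_queues is a bucket array sized and allocated once at
                 * boot; masking with (futex_hashsize - 1) always yields a
                 * valid index, so the returned pointer is never NULL.
                 */
                return &futex_queues[hash & (futex_hashsize - 1)];
        }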
>>
>>
>> This happened on kernel 4.19.149 running on an Azure VM.
4.19.149 is almost 50 versions behind the latest 4.19.194 stable.
The other question is whether this happens with a less dead kernel as
well.
Thanks,
tglx