Message-ID: <1568642487.5576.152.camel@lca.pw>
Date: Mon, 16 Sep 2019 10:01:27 -0400
From: Qian Cai <cai@....pw>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
peterz@...radead.org, mingo@...hat.com
Cc: akpm@...ux-foundation.org, tglx@...utronix.de, thgarnie@...gle.com,
tytso@....edu, cl@...ux.com, penberg@...nel.org,
rientjes@...gle.com, will@...nel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, keescook@...omium.org
Subject: Re: [PATCH] mm/slub: fix a deadlock in shuffle_freelist()
On Mon, 2019-09-16 at 11:03 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-09-13 12:27:44 [-0400], Qian Cai wrote:
> …
> > Chain exists of:
> > random_write_wait.lock --> &rq->lock --> batched_entropy_u32.lock
> >
> > Possible unsafe locking scenario:
> >
> >        CPU0                    CPU1
> >        ----                    ----
> >   lock(batched_entropy_u32.lock);
> >                                lock(&rq->lock);
> >                                lock(batched_entropy_u32.lock);
> >   lock(random_write_wait.lock);
>
> would this deadlock still occur if lockdep knew that
> batched_entropy_u32.lock on CPU0 could be acquired at the same time
> as CPU1 acquired its batched_entropy_u32.lock?
I suppose that might fix it too, if lockdep could be taught that trick. But it
would be better to have a patch, if you have something in mind, so it could be
tested to make sure.