Message-ID: <20190916195115.g4hj3j3wstofpsdr@linutronix.de>
Date: Mon, 16 Sep 2019 21:51:15 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Qian Cai <cai@....pw>
Cc: peterz@...radead.org, mingo@...hat.com, akpm@...ux-foundation.org,
tglx@...utronix.de, thgarnie@...gle.com, tytso@....edu,
cl@...ux.com, penberg@...nel.org, rientjes@...gle.com,
will@...nel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
keescook@...omium.org
Subject: Re: [PATCH] mm/slub: fix a deadlock in shuffle_freelist()
On 2019-09-16 10:01:27 [-0400], Qian Cai wrote:
> On Mon, 2019-09-16 at 11:03 +0200, Sebastian Andrzej Siewior wrote:
> > On 2019-09-13 12:27:44 [-0400], Qian Cai wrote:
> > …
> > > Chain exists of:
> > > random_write_wait.lock --> &rq->lock --> batched_entropy_u32.lock
> > >
> > > Possible unsafe locking scenario:
> > >
> > > CPU0 CPU1
> > > ---- ----
> > > lock(batched_entropy_u32.lock);
> > > lock(&rq->lock);
> > > lock(batched_entropy_u32.lock);
> > > lock(random_write_wait.lock);
> >
> > would this deadlock still occur if lockdep knew that
> > batched_entropy_u32.lock on CPU0 could be acquired at the same time
> > as CPU1 acquired its batched_entropy_u32.lock?
>
> I suppose that might fix it too, if it can teach lockdep that trick, but it
> would be better to have a patch that could be tested to make sure, if you
> have something in mind.
get_random_bytes() is heavier than get_random_int(), so I would prefer to
avoid using it just to fix what looks like a false-positive report from
lockdep.
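To make the comparison concrete, here is a minimal sketch (not the patch under
discussion; pick_random_seed() is just an illustrative name) showing the two
interfaces side by side:

#include <linux/random.h>

/*
 * Sketch only: get_random_int() is served from the per-CPU batched
 * entropy pool (the batched_entropy_u32.lock path lockdep complains
 * about), while get_random_bytes() goes through the full extraction
 * path and is noticeably more expensive per call.
 */
static unsigned long pick_random_seed(void)
{
	unsigned long seed;

	/* cheap: per-CPU batch, takes batched_entropy_u32.lock */
	seed = get_random_int();

	/* heavier: full extraction, does not touch the batched pool */
	get_random_bytes(&seed, sizeof(seed));

	return seed;
}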
But no, I don't have a patch sitting around. A lock in per-CPU memory could
lead to the scenario mentioned above if the lock could be obtained cross-CPU;
it just isn't so in this case. So I don't think it is that simple.
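To illustrate that point, here is a simplified sketch (not the actual random.c
code; example_batch_lock is a made-up name, and each instance would need
spin_lock_init() at boot, omitted here):

#include <linux/percpu.h>
#include <linux/spinlock.h>

/*
 * The per-CPU lock is only ever taken on the local CPU with interrupts
 * disabled, so the "CPU0" and "CPU1" acquisitions in the lockdep
 * scenario above are in fact two different locks; lockdep just folds
 * all per-CPU instances into one class and reports them as one lock.
 */
static DEFINE_PER_CPU(spinlock_t, example_batch_lock);

static void touch_local_batch(void)
{
	unsigned long flags;
	spinlock_t *lock;

	local_irq_save(flags);			/* stay on this CPU */
	lock = this_cpu_ptr(&example_batch_lock);
	spin_lock(lock);			/* never acquired cross-CPU */
	/* ... consume from the local batch ... */
	spin_unlock(lock);
	local_irq_restore(flags);
}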
Sebastian