Message-ID: <20191122202059.GA2844@hirez.programming.kicks-ass.net>
Date:   Fri, 22 Nov 2019 21:20:59 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc:     linux-kernel@...r.kernel.org, linux-tip-commits@...r.kernel.org,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Qian Cai <cai@....pw>, Thomas Gleixner <tglx@...utronix.de>,
        akpm@...ux-foundation.org, cl@...ux.com, keescook@...omium.org,
        penberg@...nel.org, rientjes@...gle.com, thgarnie@...gle.com,
        tytso@....edu, will@...nel.org, Ingo Molnar <mingo@...nel.org>,
        Borislav Petkov <bp@...en8.de>
Subject: Re: [tip: sched/urgent] sched/core: Avoid spurious lock dependencies

On Fri, Nov 22, 2019 at 09:01:22PM +0100, Sebastian Andrzej Siewior wrote:
> On 2019-11-13 10:06:28 [-0000], tip-bot2 for Peter Zijlstra wrote:
> > sched/core: Avoid spurious lock dependencies
> > 
> > While seemingly harmless, __sched_fork() does hrtimer_init(), which,
> > when DEBUG_OBJECTS is enabled, can end up doing allocations.
> > 
> > This then results in the following lock order:
> > 
> >   rq->lock
> >     zone->lock.rlock
> >       batched_entropy_u64.lock
> > 
> > Which in turn causes deadlocks when we do wakeups while holding that
> > batched_entropy lock -- as the random code does.
> 
> Peter, can it _really_ cause deadlocks? My understanding was that the
> batched_entropy_u64.lock is a per-CPU lock and can _not_ cause a
> deadlock because it can always be acquired on multiple CPUs
> simultaneously (and it is never acquired cross-CPU).
> Lockdep is simply not smart enough to see that and complains about it
> like it would complain about a regular lock in this case.

That part, yes. That is, even while holding a per-CPU lock you can do a
wakeup on the local CPU and recurse back onto rq->lock.
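
To make the report concrete, here is a minimal compile-time sketch of the
two acquisition orders lockdep pairs up. The lock and function names are
illustrative stand-ins, not the real kernel code; the point is that every
CPU's batch lock shares a single lockdep class, so the two paths close a
cycle even though a real deadlock would need the same lock instance on
both sides:

	#include <linux/spinlock.h>

	/* Sketch only: plain spinlocks standing in for the real locks. */
	static DEFINE_SPINLOCK(rq_lock);	/* rq->lock */
	static DEFINE_SPINLOCK(batch_lock);	/* batched_entropy_u64.lock */

	static void sched_path(void)		/* init_idle()-like path */
	{
		spin_lock(&rq_lock);
		/* hrtimer_init() + DEBUG_OBJECTS -> allocation ->
		 * zone->lock -> batched entropy */
		spin_lock(&batch_lock);
		spin_unlock(&batch_lock);
		spin_unlock(&rq_lock);
	}

	static void random_path(void)		/* wakeup under the batch lock */
	{
		spin_lock(&batch_lock);
		spin_lock(&rq_lock);		/* inverted order: lockdep splat */
		spin_unlock(&rq_lock);
		spin_unlock(&batch_lock);
	}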

However, I don't think it can actually happen, because this is
init_idle(), and we only ever do that on hotplug, so actually creating
the concurrency required for the deadlock might be tricky.

Still, moving that thing out from under the lock was simple and correct.
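
For reference, the reordering in that commit looks roughly like this
(quoting from memory, so take it as a sketch of the hunk rather than the
exact patch):

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ void init_idle(struct task_struct *idle, int cpu)
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long flags;
 
+	__sched_fork(0, idle);
+
 	raw_spin_lock_irqsave(&idle->pi_lock, flags);
 	raw_spin_lock(&rq->lock);
 
-	__sched_fork(0, idle);
 	idle->state = TASK_RUNNING;
 	idle->se.exec_start = sched_clock();

With hrtimer_init() done before rq->lock is taken, the DEBUG_OBJECTS
allocation can no longer nest inside it, so the rq->lock -> zone->lock ->
batched_entropy_u64.lock chain is never recorded in the first place.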
