Message-ID: <20160118183205.GW6357@twins.programming.kicks-ass.net>
Date: Mon, 18 Jan 2016 19:32:05 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Christian Borntraeger <borntraeger@...ibm.com>
Cc: Tejun Heo <tj@...nel.org>,
"linux-kernel@...r.kernel.org >> Linux Kernel Mailing List"
<linux-kernel@...r.kernel.org>,
linux-s390 <linux-s390@...r.kernel.org>,
KVM list <kvm@...r.kernel.org>,
Oleg Nesterov <oleg@...hat.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: regression 4.4: deadlock in with cgroup percpu_rwsem

On Fri, Jan 15, 2016 at 04:13:34PM +0100, Christian Borntraeger wrote:
> > Yes, the deadlock is gone and the system is still running.
> > After some time I had the following WARN in the logs, though.
> > Not sure yet if that is related.
> >
> > [25331.763607] DEBUG_LOCKS_WARN_ON(lock->owner != current)
> > [25331.763630] ------------[ cut here ]------------
> > [25331.763634] WARNING: at kernel/locking/mutex-debug.c:80
> I restarted the test with panic_on_warn. Hopefully I can get a dump to check
> which mutex this was.

Hard-to-reproduce warnings like this tend to point towards memory
corruption. Someone stepped on the mutex value and tickled the sanity
check.
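
FWIW, the check that fires there sits on the unlock path; roughly this
(paraphrased and trimmed from memory of the 4.4-era
kernel/locking/mutex-debug.c, so the details may differ slightly):

	void debug_mutex_unlock(struct mutex *lock)
	{
		if (likely(debug_locks)) {
			/* ->magic should point back at the mutex itself */
			DEBUG_LOCKS_WARN_ON(lock->magic != lock);

			/*
			 * Only the task that acquired the mutex may release
			 * it; a stray write into ->owner trips this WARN.
			 */
			DEBUG_LOCKS_WARN_ON(lock->owner != current);

			/* the wait list must still look like a list */
			DEBUG_LOCKS_WARN_ON(!lock->wait_list.prev &&
					    !lock->wait_list.next);
		}

		mutex_clear_owner(lock);
		atomic_set(&lock->count, 1);
	}

So anything that scribbles over ->owner (or ->magic) makes the next
unlock scream, even though the locking itself is fine.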
With lockdep and debugging enabled the mutex gets quite a bit bigger,
which makes it more likely to be hit by 'random' corruption.
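
For reference, this is roughly what struct mutex looks like with the
debug options on (field list from memory of the 4.4-era
include/linux/mutex.h, so take the exact #ifdefs with a grain of salt):

	struct mutex {
		atomic_t		count;
		spinlock_t		wait_lock;
		struct list_head	wait_list;
	#if defined(CONFIG_DEBUG_MUTEXES) || defined(CONFIG_MUTEX_SPIN_ON_OWNER)
		struct task_struct	*owner;	/* what mutex-debug.c:80 checks */
	#endif
	#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
		struct optimistic_spin_queue osq;
	#endif
	#ifdef CONFIG_DEBUG_MUTEXES
		void			*magic;
	#endif
	#ifdef CONFIG_DEBUG_LOCK_ALLOC
		struct lockdep_map	dep_map;	/* lockdep state, not small */
	#endif
	};

All those extra debug fields are just more bytes for a stray write to
land in.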
The locking in seq_read() seems rather straightforward.
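Condensed to its shape (the buffer handling in the middle elided, so
this is only the outline, not the actual fs/seq_file.c code):

	ssize_t seq_read(struct file *file, char __user *buf,
			 size_t size, loff_t *ppos)
	{
		struct seq_file *m = file->private_data;
		ssize_t copied = 0;

		mutex_lock(&m->lock);	/* one per-file mutex, taken on entry */

		/* ... fill m->buf and copy it out to userspace ... */

		mutex_unlock(&m->lock);	/* dropped on every return path */
		return copied;
	}

One lock, taken and released in the same function; if ->owner looks
wrong there, something else wrote over it.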