Message-ID: <alpine.LSU.2.00.1205070951170.1544@eggly.anvils>
Date: Mon, 7 May 2012 10:19:09 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Tejun Heo <tj@...nel.org>
cc: Stephen Boyd <sboyd@...eaurora.org>,
Yong Zhang <yong.zhang0@...il.com>,
linux-kernel@...r.kernel.org
Subject: linux-next oops in __lock_acquire for process_one_work
Hi Tejun,
Running an MM load on recent linux-nexts (e.g. 3.4.0-rc5-next-20120504),
with CONFIG_PROVE_LOCKING=y, I've been hitting an oops in __lock_acquire,
called from lock_acquire, called from process_one_work: the work being
served was mm/swap.c's lru_add_drain_per_cpu, queued by lru_add_drain_all's
schedule_on_each_cpu(lru_add_drain_per_cpu).
In each case the oopsing address has been ffffffff00000198, and the
oopsing instruction is the "atomic_inc((atomic_t *)&class->ops)" in
__lock_acquire: so class is ffffffff00000000 (the fault address less
the 0x198 offset of ops within struct lock_class).
I notice Stephen's commit 0976dfc1d0cd80a4e9dfaf87bd8744612bde475a
"workqueue: Catch more locking problems with flush_work()"
in linux-next but not 3.4-rc, adding

	lock_map_acquire(&work->lockdep_map);
	lock_map_release(&work->lockdep_map);

to flush_work().
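
For context, roughly what flush_work() looks like with that commit
applied (paraphrased from memory, not the exact diff):

	bool flush_work(struct work_struct *work)
	{
		struct wq_barrier barr;

		/* Stephen's addition: "take" the work's lockdep map so that
		 * flushing a work item from inside a lock it also takes is
		 * reported as a deadlock. */
		lock_map_acquire(&work->lockdep_map);
		lock_map_release(&work->lockdep_map);

		if (start_flush_work(work, &barr, true)) {
			wait_for_completion(&barr.done);
			destroy_work_on_stack(&barr.work);
			return true;
		} else
			return false;
	}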
I believe that occasionally races with your

	struct lockdep_map lockdep_map = work->lockdep_map;

in process_one_work(), putting an entry into the class_cache just as
you're copying it, so you end up with half a pointer: yes, the structure
copy is done with "rep movsl" rather than "rep movsq", so the 64-bit
pointer is copied in two 32-bit halves.
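
To spell out the interleaving I have in mind (a sketch only, paraphrasing
the relevant lockdep and workqueue code rather than quoting it exactly):

	/*
	 * CPU A: flush_work(work)
	 *   lock_map_acquire(&work->lockdep_map)
	 *     __lock_acquire() -> register_lock_class()
	 *       lock->class_cache[subclass] = class;  8-byte pointer store
	 *                                             into work->lockdep_map
	 *
	 * CPU B: process_one_work(work), at the same time:
	 *   struct lockdep_map lockdep_map = work->lockdep_map;
	 *
	 * The copy on CPU B goes 4 bytes at a time ("rep movsl"), so it can
	 * pick up the old low half of the cache slot (00000000, from NULL)
	 * together with the new high half (ffffffff) of the class pointer.
	 * CPU B then does lock_map_acquire() on that stack copy,
	 * __lock_acquire() trusts the cached class = ffffffff00000000, and
	 * atomic_inc((atomic_t *)&class->ops) faults at ffffffff00000198.
	 */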
I've reverted Stephen's commit from my testing, and indeed it has now
run that MM load for much longer than I've seen since this bug first
appeared. Though I suspect that, strictly, it's your unlocked copying
of the lockdep_map that's to blame. Probably easily fixed by someone
who understands lockdep - not me!
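
Maybe something along these lines would do it (completely untested, just
a sketch, and the helper here is invented for illustration): copy the map
by hand but leave the class_cache[] entries clear in the copy, so even a
torn read of a cache slot is thrown away rather than dereferenced:

	/* Untested sketch only: copy a lockdep_map but drop its class_cache,
	 * so the copy can never carry a half-written cached pointer. */
	static inline void lockdep_copy_map(struct lockdep_map *to,
					    struct lockdep_map *from)
	{
		int i;

		*to = *from;
		/*
		 * A concurrent update can still tear the class_cache words
		 * during the copy above, but since they are cleared here the
		 * torn value is never used: lockdep just looks the class up
		 * again on the next acquire.
		 */
		for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++)
			to->class_cache[i] = NULL;
	}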
Hugh