Date: Wed, 5 May 2010 07:34:37 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Mel Gorman <mel@....ul.ie>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Minchan Kim <minchan.kim@...il.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Christoph Lameter <cl@...ux.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information

On Wed, 5 May 2010, Mel Gorman wrote:
>
> With the recent anon_vma changes, there can be more than one anon_vma->lock
> to take in an anon_vma_chain, but a second lock cannot simply be spun on
> because that could deadlock. The rmap walker tries to take the locks of the
> different anon_vmas, but if an attempt fails, the locks are released and the
> operation is restarted.
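
(For reference, the retry scheme described above has roughly the following
shape. This is only a userspace sketch, with pthread mutexes standing in for
anon_vma->lock and invented names such as chain_entry and
lock_chain_with_retry; it is not the actual patch code.)

#include <pthread.h>

struct chain_entry {
        pthread_mutex_t lock;           /* stands in for anon_vma->lock */
        struct chain_entry *next;       /* stands in for the same_anon_vma list */
};

/*
 * Retry scheme: take the first lock, then trylock every further lock on
 * the chain.  If any trylock fails, drop everything taken so far and
 * start over from the beginning.
 */
static void lock_chain_with_retry(struct chain_entry *head)
{
retry:
        pthread_mutex_lock(&head->lock);
        for (struct chain_entry *e = head->next; e; e = e->next) {
                if (pthread_mutex_trylock(&e->lock) != 0) {
                        /* contention: unlock everything and restart */
                        for (struct chain_entry *u = head; u != e; u = u->next)
                                pthread_mutex_unlock(&u->lock);
                        goto retry;
                }
        }
}
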
Btw, is this really needed?

Nobody else takes two anon_vma locks at the same time, so in order to
avoid ABBA deadlocks all we need to guarantee is that rmap_walk_ksm() and
rmap_walk_anon() always lock the anon_vma's in the same order.

And they do, as far as I can tell. How could we ever get a deadlock when
we have both cases doing the locking by walking the same_anon_vma list?

        list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
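
(As a userspace illustration of that point, reusing the invented chain_entry
type from the sketch above: if every path that needs more than one of these
locks takes them by walking the same list from the head, the lock order is
globally consistent and an ABBA deadlock cannot form. Again, this is not
kernel code.)

/*
 * Ordered nested locking: every caller walks the chain from the head and
 * takes the locks in that order, so two callers can never end up holding
 * the same pair of locks in opposite orders.
 */
static void lock_chain_in_order(struct chain_entry *head)
{
        for (struct chain_entry *e = head; e; e = e->next)
                pthread_mutex_lock(&e->lock);   /* always head to tail */
}

static void unlock_chain_in_order(struct chain_entry *head)
{
        for (struct chain_entry *e = head; e; e = e->next)
                pthread_mutex_unlock(&e->lock);
}

In the kernel case, the list_for_each_entry() walk over the same_anon_vma
list plays the role of that fixed head-to-tail order for both
rmap_walk_anon() and rmap_walk_ksm().
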
So I think the "retry" logic looks unnecessary, and actually opens us up
to a possible livelock bug (imagine a long chain, and heavy page fault
activity elsewhere that ends up locking some anon_vma in the chain, and
just the right behavior that gets us into a lockstep situation), rather
than fixing an ABBA deadlock.

Now, if it's true that somebody else _does_ do nested anon_vma locking,
I'm obviously wrong. But I don't see such usage.

Comments?

                Linus