Message-ID: <alpine.LSU.2.00.1202211205280.1858@eggly.anvils>
Date: Tue, 21 Feb 2012 12:12:58 -0800 (PST)
From: Hugh Dickins <hughd@...gle.com>
To: Konstantin Khlebnikov <khlebnikov@...nvz.org>
cc: Andrew Morton <akpm@...ux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Johannes Weiner <hannes@...xchg.org>,
Ying Han <yinghan@...gle.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 9/10] mm/memcg: move lru_lock into lruvec

On Tue, 21 Feb 2012, Konstantin Khlebnikov wrote:
>
> On lumpy/compaction isolate you do:
>
> if (!PageLRU(page))
>     continue
>
> __isolate_lru_page()
>
> page_relock_rcu_vec()
>     rcu_read_lock()
>     rcu_dereference()...
>     spin_lock()...
>     rcu_read_unlock()
>
> You protect page_relock_rcu_vec with switching pointers back to root.
>
> I do:
>
> catch_page_lru()
>     rcu_read_lock()
>     if (!PageLRU(page))
>         return false
>     rcu_dereference()...
>     spin_lock()...
>     rcu_read_unlock()
>     if (PageLRU())
>         return true
>
> if true
>     __isolate_lru_page()
>
> I protect my catch_page_lruvec() with a PageLRU() check under a single
> rcu interval, together with the locking.
> Thus my code is better, because it does not require switching pointers
> back to the root memcg.
That sounds much better, yes - if it does work reliably.
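
If I've followed your description, the isolation side amounts to something
like the sketch below.  This is only my reading of it, with guessed names -
page_lruvec() stands in for however your series maps page to lruvec (doing
the rcu_dereference() internally), and irq handling is elided:

    static struct lruvec *catch_page_lruvec(struct page *page)
    {
        struct lruvec *lruvec;

        rcu_read_lock();
        if (!PageLRU(page)) {
            rcu_read_unlock();
            return NULL;
        }
        lruvec = page_lruvec(page);     /* rcu_dereference() inside */
        spin_lock(&lruvec->lru_lock);
        rcu_read_unlock();

        if (PageLRU(page))              /* recheck now that the lock pins it */
            return lruvec;              /* returned with lru_lock held */

        spin_unlock(&lruvec->lru_lock);
        return NULL;
    }

    /* in the pfn-ordered lumpy/compaction scan; mode, file, list are
       the scan's existing locals: */
    lruvec = catch_page_lruvec(page);
    if (!lruvec)
        continue;                       /* not (yet) on any lru: skip */
    if (__isolate_lru_page(page, mode, file) == 0)
        list_move(&page->lru, list);    /* onto the caller's private list */
    spin_unlock(&lruvec->lru_lock);

So the difference from my page_relock_rcu_vec() path is that your PageLRU
test sits inside the rcu section, and is repeated once the lock is held,
rather than being left to the caller before taking the lock.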
I'll have to come back to think about your locking later too;
or maybe that's exactly where I need to look, when investigating
the mm_inline.h:41 BUG.

But at first sight, I have to say I'm very suspicious: I've never found
PageLRU a good enough test for whether we need such a lock, because of
races with those pages sitting in a per-cpu pagevec, about to be put on
the lru.  But maybe once I look closer, I'll find that's handled by your
changes away from pagevec; though I'd have thought the same issue exists,
independent of whether the pending pages are in a vector or on a list.
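
To spell out the interleaving I have in mind (schematic only, and using
today's pagevec names - your series may have replaced them):

    CPU 0: adding to lru                CPU 1: pfn-ordered isolation
    --------------------                ----------------------------
    lru_cache_add(page)
        pagevec_add(pvec, page)
        /* page parked in the per-cpu
           pagevec, not PageLRU yet */
                                        catch_page_lru(page)
                                            rcu_read_lock()
                                            !PageLRU(page) -> return false
                                        /* caller concludes it needs no
                                           lru_lock for this page */
    pagevec fills up, or lru_add_drain()
        __pagevec_lru_add(pvec)
            SetPageLRU(page)
            add page to its lruvec list
                                        /* whatever CPU 1 went on to do
                                           without the lock now races with
                                           the page really being on that lru */
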
Hugh
>
> Meanwhile, after seeing your patches, I realized that this rcu protection
> is required only for the lock-by-pfn case in lumpy/compaction isolation.
> Thus my locking should be simplified and optimized.