Message-ID: <alpine.LSU.2.00.1202211140330.1858@eggly.anvils>
Date: Tue, 21 Feb 2012 12:00:08 -0800 (PST)
From: Hugh Dickins <hughd@...gle.com>
To: Konstantin Khlebnikov <khlebnikov@...nvz.org>
cc: Andrew Morton <akpm@...ux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Johannes Weiner <hannes@...xchg.org>,
Ying Han <yinghan@...gle.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 6/10] mm/memcg: take care over pc->mem_cgroup
On Tue, 21 Feb 2012, Konstantin Khlebnikov wrote:
> >
> > Be particularly careful in compaction's isolate_migratepages() and
> > vmscan's lumpy handling in isolate_lru_pages(): those approach the
> > page by its physical location, and so can encounter pages which
> > would not be found by any logical lookup. For those cases we have
> > to change __isolate_lru_page() slightly: it must leave ClearPageLRU
> > to the caller, because compaction and lumpy cannot safely interfere
> > with a page until they have first isolated it and then locked lruvec.
>
> Yeah, this is the most complicated part.
Yes, I found myself leaving this patch until last when commenting, and
I was not entirely convinced by what I then said of move_account().
Indeed, I wondered if move_account() might be improved and simplified
by taking the lruvec locks itself, in the manner that commit_charge's
lrucare path ends up doing.
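Something of this shape is what I have in mind, inside
mem_cgroup_move_account() where pc and to are already at hand: untested
sketch only, with the LRU-list signatures approximated
(page_relock_lruvec() is from your series, unlock_lruvec() is just a
name invented for the sketch):

	struct lruvec *lruvec = NULL;
	bool was_on_lru = false;

	page_relock_lruvec(page, &lruvec);	/* lock the old memcg's lruvec */
	if (PageLRU(page)) {
		ClearPageLRU(page);
		del_page_from_lru_list(page, lruvec, page_lru(page));
		was_on_lru = true;
	}

	pc->mem_cgroup = to;			/* the recharge itself */

	if (was_on_lru) {
		page_relock_lruvec(page, &lruvec);	/* now finds the new lruvec */
		add_page_to_lru_list(page, lruvec, page_lru(page));
		SetPageLRU(page);
	}
	unlock_lruvec(lruvec);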
> I found one race here, see below.
Thank you!
> > @@ -386,10 +364,24 @@ static isolate_migrate_t isolate_migrate
> > continue;
> >
> > page_relock_lruvec(page,&lruvec);
>
> Here there is a race with mem_cgroup_move_account(): we hold the lock for
> the old lruvec, while move_account() recharges the page and puts it back
> onto another lruvec.  Thus we see PageLRU(), but below we isolate the page
> from the wrong lruvec.
I expect you'll prove right on that, but I'm going to think about it
more later.
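Spelling out the interleaving you describe, just so I have it in front
of me when I come back to it:

	/*
	 * compaction:                       mem_cgroup_move_account():
	 *   page_relock_lruvec(page, &lruvec)
	 *     locks the old memcg's lruvec
	 *                                     recharges the page
	 *                                     (pc->mem_cgroup now points elsewhere,
	 *                                      so the page belongs to another lruvec)
	 *   PageLRU(page) is still set,
	 *   so we go on to isolate the page
	 *   from the old, now wrong, lruvec
	 */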
>
> In my patch-set this is fixed with __wait_lru_unlock() [ spin_unlock_wait() ]
> in mem_cgroup_move_account()
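If I've understood it, your fix is roughly of this shape - sketch only,
with placement and the lruvec lookup approximated (page_lruvec() here is
a hypothetical helper; __wait_lru_unlock() is yours):

	/* in mem_cgroup_move_account(): */
	struct lruvec *old_lruvec = page_lruvec(page);	/* old memcg's lruvec */

	pc->mem_cgroup = to;			/* recharge */
	/*
	 * spin_unlock_wait() on the old lruvec's lock: wait for any
	 * pfn-based scanner (compaction, lumpy reclaim) which locked
	 * the old lruvec to drop that lock before we go further.
	 */
	__wait_lru_unlock(old_lruvec);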
Right now, after finishing this mail, I want to concentrate on getting
your series working under my swapping load.
It's possible that I screwed up rediffing it to my slightly later
base (though the only parts that appeared to need fixing up were as
expected, near update_isolated_counts and move_active_pages_to_lru);
but if so I'd expect that to show up differently.
At present, although it runs fine with cgroup_disable=memory, with memcg
enabled two machines very soon hit that BUG at include/linux/mm_inline.h:41!
when the lru_size or pages_count wraps around; a third machine hit it
precisely when I stopped the test.
In all cases it's in release_pages from free_pages_and_swap_cache from
tlb_flush_mmu from tlb_finish_mmu from exit_mmap from mmput from exit_mm
from do_exit (but different processes exiting).
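For reference, the check I mean is of this shape (paraphrased and
simplified; the helper name here is invented, pages_count/lru_size being
the per-lruvec counters from our respective series):

	static inline void lruvec_pages_sub(struct lruvec *lruvec,
					    enum lru_list lru, int nr_pages)
	{
		/* fires when the per-lruvec counter would go below zero,
		 * i.e. we are removing a page this lruvec never counted */
		VM_BUG_ON(lruvec->pages_count[lru] < nr_pages);
		lruvec->pages_count[lru] -= nr_pages;
	}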
Hugh