Message-Id: <20120216110408.f35c3448.kamezawa.hiroyu@jp.fujitsu.com>
Date: Thu, 16 Feb 2012 11:04:08 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Konstantin Khlebnikov <khlebnikov@...nvz.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Hugh Dickins <hughd@...gle.com>,
"hannes@...xchg.org" <hannes@...xchg.org>
Subject: Re: [PATCH RFC 00/15] mm: memory book keeping and lru_lock
splitting
On Thu, 16 Feb 2012 02:57:04 +0400
Konstantin Khlebnikov <khlebnikov@...nvz.org> wrote:
> There should be no logic changes in this patchset; it only tosses bits around.
> [ This patchset is on top some memcg cleanup/rework patches,
> which I sent to linux-mm@ today/yesterday ]
>
> Most things in this patchset are self-descriptive, so here is a brief plan:
>
AFAIK, Hugh Dickins said he has per-zone-per-lru-lock and is testing it.
So, please CC him and Johannes, at least.
> * Transmute struct lruvec into struct book. Like a real book, this struct will
>   store a set of pages for one zone. It will be the working unit for reclaimer code.
>   [ If memcg is disabled in the config, there will be only one book, embedded in struct zone ]
>
Why do you need to add a new structure rather than enhance lruvec?
Does "book" mean a binder of pages?
> * move the page-lru counters to struct book
>   [ this adds extra overhead in add_page_to_lru_list()/del_page_from_lru_list() for the
>     non-memcg case, but I believe it will be invisible: only one non-atomic add/sub
>     in the same cacheline as the lru list ]
>
This seems straightforward.
> * unify inactive_list_is_low_global() and cleanup reclaimer code
> * replace struct mem_cgroup_zone with single pointer to struct book
Hm, ok.
> * optimize page-to-book translations: move them higher in the call stack and
>   replace some struct zone arguments with struct book pointers.
>
A page->book translator from patch 2/15:
+struct book *page_book(struct page *page)
+{
+	struct mem_cgroup_per_zone *mz;
+	struct page_cgroup *pc;
+
+	if (mem_cgroup_disabled())
+		return &page_zone(page)->book;
+
+	pc = lookup_page_cgroup(page);
+	if (!PageCgroupUsed(pc))
+		return &page_zone(page)->book;
+	/* Ensure pc->mem_cgroup is visible after reading PCG_USED. */
+	smp_rmb();
+	mz = mem_cgroup_zoneinfo(pc->mem_cgroup,
+				 page_to_nid(page), page_zonenum(page));
+	return &mz->book;
+}
What happens when pc->mem_cgroup is rewritten by move_account()?
Where is the guard for this lockless access?
Thanks,
-Kame
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/