Message-Id: <20080630110241.82bdd5b0.kamezawa.hiroyu@jp.fujitsu.com>
Date: Mon, 30 Jun 2008 11:02:41 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: balbir@...ux.vnet.ibm.com,
Andrew Morton <akpm@...ux-foundation.org>,
YAMAMOTO Takashi <yamamoto@...inux.co.jp>,
Paul Menage <menage@...gle.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [RFC 0/5] Memory controller soft limit introduction (v3)
On Mon, 30 Jun 2008 10:50:06 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
> ==
> if (scan_global_lru(sc)) {
> if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
> continue;
> note_zone_scanning_priority(zone, priority);
>
> if (zone_is_all_unreclaimable(zone) &&
> priority != DEF_PRIORITY)
> continue; /* Let kswapd poll it */
> sc->all_unreclaimable = 0;
> } else {
> /*
> * Ignore cpuset limitation here. We just want to reduce
> * # of used pages by us regardless of memory shortage.
> */
> sc->all_unreclaimable = 0;
> mem_cgroup_note_reclaim_priority(sc->mem_cgroup,
> priority);
> }
> ==
>
> The first point is (maybe) my mistake. We have to add a cpuset hardwall check to
> the memcg part. (I will write a patch soon.)
>
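To be concrete about that first point, what I had in mind is just mirroring the
hardwall check of the global-LRU branch inside the else-part above. An untested
sketch, not a real patch yet:
==
	} else {
		/*
		 * Sketch: honor the cpuset hardwall for memcg reclaim too,
		 * so we never scan a zone the reclaiming task is not
		 * allowed to allocate from.
		 */
		if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
			continue;
		sc->all_unreclaimable = 0;
		mem_cgroup_note_reclaim_priority(sc->mem_cgroup,
						priority);
	}
==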
Looking at it again, though, my own comment seems to be saying something correct..
==
/*
* Ignore cpuset limitation here. We just want to reduce
* # of used pages by us regardless of memory shortage.
*/
==
When we start using this path to handle real memory shortage, we'll have to change
this way of thinking. But I can easily think of another example where ignoring the
cpuset looks like the right thing...
==
MemcgA:  limit=1G
CpusetX: mems=0 (node 0 only)
CpusetY: mems=1 (node 1 only)
taskP = MemcgA+CpusetX
taskQ = MemcgA+CpusetY
==
In this case MemcgA's pages can sit on node 1 (charged by taskQ), which taskP's
cpuset forbids; when taskP hits the limit we still just want to reduce MemcgA's
usage regardless of the cpuset....or is that nonsense ?
Hmm..I should refresh my brain and revisit this later.
Any inputs are welcome.
Thanks,
-Kame