Message-Id: <20100222084807.fee163f6.kamezawa.hiroyu@jp.fujitsu.com>
Date: Mon, 22 Feb 2010 08:48:07 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Andrea Righi <arighi@...eler.com>
Cc: Balbir Singh <balbir@...ux.vnet.ibm.com>,
Suleiman Souhlal <suleiman@...gle.com>,
Vivek Goyal <vgoyal@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
containers@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC] [PATCH 0/2] memcg: per cgroup dirty limit
On Sun, 21 Feb 2010 16:18:43 +0100
Andrea Righi <arighi@...eler.com> wrote:
> Control the maximum number of dirty pages a cgroup can have at any given time.
>
> A per-cgroup dirty limit fixes the maximum amount of dirty (hard to reclaim)
> page cache that any single cgroup can use. With multiple cgroup writers, no
> cgroup can consume more than its designated share of dirty pages, and each is
> forced to perform write-out once it crosses that limit.
>
> The overall design is the following:
>
> - account dirty pages per cgroup
> - limit the number of dirty pages via memory.dirty_bytes in cgroupfs
> - start to write-out in balance_dirty_pages() when the cgroup or global limit
> is exceeded
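
Just to make the flow concrete, here is a minimal userspace sketch of the
three steps above; it is not taken from the patch, and the names
(struct memcg, memcg_account_dirty, over_dirty_limit) are made up purely for
illustration:

#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE 4096UL

/* Illustrative stand-in for the per-cgroup dirty state. */
struct memcg {
        unsigned long dirty_pages;   /* pages currently dirty in this cgroup */
        unsigned long dirty_limit;   /* memory.dirty_bytes / PAGE_SIZE */
};

/* Step 1: account a newly dirtied page to its owning cgroup. */
static void memcg_account_dirty(struct memcg *m)
{
        m->dirty_pages++;
}

/*
 * Steps 2-3: a balance_dirty_pages()-style check that throttles when
 * either the per-cgroup or the global dirty limit is exceeded.
 */
static bool over_dirty_limit(const struct memcg *m,
                             unsigned long global_dirty,
                             unsigned long global_limit)
{
        return m->dirty_pages > m->dirty_limit ||
               global_dirty > global_limit;
}

int main(void)
{
        /* memory.dirty_bytes = 8 MiB -> 2048 pages */
        struct memcg m = { .dirty_pages = 0,
                           .dirty_limit = (8UL << 20) / PAGE_SIZE };

        for (int i = 0; i < 3000; i++)
                memcg_account_dirty(&m);

        printf("dirty=%lu limit=%lu throttle=%d\n",
               m.dirty_pages, m.dirty_limit,
               over_dirty_limit(&m, m.dirty_pages, 100000));
        return 0;
}

In the patch itself the check would of course sit in balance_dirty_pages()
and in the page dirtying/cleaning paths, as described above, rather than in
a standalone loop.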
>
> This feature is meant to work closely with any underlying IO controller
> implementation, so we can stop the growth of dirty pages at the VM layer and
> enforce a write-out before any single cgroup consumes the global amount of
> dirty pages defined by the /proc/sys/vm/dirty_ratio|dirty_bytes limits.
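
For reference, the existing global knobs behave roughly like this: when
/proc/sys/vm/dirty_bytes is non-zero it defines the threshold directly,
otherwise dirty_ratio is applied as a percentage of dirtyable memory. A
simplified sketch of that arithmetic (the function name and the numbers are
illustrative, not from the kernel source):

#include <stdio.h>

#define PAGE_SIZE 4096UL

/*
 * Rough model of how the global dirty threshold is derived from
 * /proc/sys/vm/dirty_bytes or /proc/sys/vm/dirty_ratio: a non-zero
 * dirty_bytes wins, otherwise dirty_ratio is applied as a percentage
 * of dirtyable memory.
 */
static unsigned long global_dirty_threshold(unsigned long dirty_bytes,
                                            unsigned long dirty_ratio,
                                            unsigned long dirtyable_pages)
{
        if (dirty_bytes)
                return (dirty_bytes + PAGE_SIZE - 1) / PAGE_SIZE;
        return dirtyable_pages * dirty_ratio / 100;
}

int main(void)
{
        /* e.g. 1 GiB of dirtyable memory, dirty_ratio=20, dirty_bytes unset */
        unsigned long dirtyable = (1UL << 30) / PAGE_SIZE;

        printf("ratio limit: %lu pages\n",
               global_dirty_threshold(0, 20, dirtyable));
        printf("bytes limit: %lu pages\n",
               global_dirty_threshold(64UL << 20, 0, dirtyable));
        return 0;
}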
>
> TODO:
> - handle the migration of tasks across different cgroups (a page may be
> dirtied while a task runs in one cgroup and cleaned after the task has moved
> to another cgroup).
> - provide appropriate documentation (in Documentation/cgroups/memory.txt)
>
Thank you, this has been a long-standing concern in memcg.
Regards,
-Kame
> -Andrea
>