Message-Id: <20101006094928.cae0dbf7.nishimura@mxp.nes.nec.co.jp>
Date: Wed, 6 Oct 2010 09:49:28 +0900
From: Daisuke Nishimura <nishimura@....nes.nec.co.jp>
To: Greg Thelen <gthelen@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
containers@...ts.osdl.org, Andrea Righi <arighi@...eler.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Subject: Re: [PATCH 02/10] memcg: document cgroup dirty memory interfaces
On Sun, 3 Oct 2010 23:57:57 -0700
Greg Thelen <gthelen@...gle.com> wrote:
> Document cgroup dirty memory interfaces and statistics.
>
> Signed-off-by: Andrea Righi <arighi@...eler.com>
> Signed-off-by: Greg Thelen <gthelen@...gle.com>
I think you will change "nfs" to "nfs_unstable", but anyway,
Acked-by: Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Thanks
Daisuke Nishimura.
> ---
> Documentation/cgroups/memory.txt | 37 +++++++++++++++++++++++++++++++++++++
> 1 files changed, 37 insertions(+), 0 deletions(-)
>
> diff --git a/Documentation/cgroups/memory.txt b/Documentation/cgroups/memory.txt
> index 7781857..eab65e2 100644
> --- a/Documentation/cgroups/memory.txt
> +++ b/Documentation/cgroups/memory.txt
> @@ -385,6 +385,10 @@ mapped_file - # of bytes of mapped file (includes tmpfs/shmem)
> pgpgin - # of pages paged in (equivalent to # of charging events).
> pgpgout - # of pages paged out (equivalent to # of uncharging events).
> swap - # of bytes of swap usage
> +dirty - # of bytes that are waiting to get written back to the disk.
> +writeback - # of bytes that are actively being written back to the disk.
> +nfs - # of bytes sent to the NFS server, but not yet committed to
> + the actual storage.
> inactive_anon - # of bytes of anonymous memory and swap cache memory on
> LRU list.
> active_anon - # of bytes of anonymous and swap cache memory on active
> @@ -453,6 +457,39 @@ memory under it will be reclaimed.
> You can reset failcnt by writing 0 to failcnt file.
> # echo 0 > .../memory.failcnt
>
> +5.5 dirty memory
> +
> +Control the maximum amount of dirty memory a cgroup can have at any given time.
> +
> +Limiting dirty memory fixes the maximum amount of dirty (hard to reclaim) page
> +cache a cgroup can use. So, in the case of multiple cgroup writers, no cgroup
> +will be able to consume more than its designated share of dirty pages, and a
> +cgroup that crosses its limit will be forced to perform write-out.
> +
> +The interface is equivalent to the procfs interface: /proc/sys/vm/dirty_*. It
> +is possible to configure a limit to trigger either direct writeback or
> +background writeback performed by per-bdi flusher threads. The root cgroup
> +memory.dirty_* control files are read-only and match the contents of
> +the /proc/sys/vm/dirty_* files.
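> +
> +For example, the root cgroup's settings mirror the global vm sysctls. The
> +commands below assume the memory cgroup hierarchy is mounted at /cgroups;
> +the values shown are illustrative:
> +
> +# cat /proc/sys/vm/dirty_ratio
> +20
> +# cat /cgroups/memory.dirty_ratio
> +20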
> +
> +Per-cgroup dirty limits can be set using the following files in the cgroupfs:
> +
> +- memory.dirty_ratio: the amount of dirty memory (expressed as a percentage of
> + cgroup memory) at which a process generating dirty pages will itself start
> + writing out dirty data.
> +
> +- memory.dirty_bytes: the amount of dirty memory (expressed in bytes) in the
> + cgroup at which a process generating dirty pages will itself start writing out
> + dirty data.
> +
> +- memory.dirty_background_ratio: the amount of dirty memory of the cgroup
> + (expressed as a percentage of cgroup memory) at which background writeback
> + kernel threads will start writing out dirty data.
> +
> +- memory.dirty_background_bytes: the amount of dirty memory (expressed in bytes)
> + in the cgroup at which background writeback kernel threads will start writing
> + out dirty data.
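> +
> +For example, to cap a cgroup at 10% dirty memory, with background writeback
> +kicking in at 5% (the cgroup directory /cgroups/0 is illustrative):
> +
> +# echo 10 > /cgroups/0/memory.dirty_ratio
> +# echo 5 > /cgroups/0/memory.dirty_background_ratio
> +
> +As with the equivalent vm sysctls, only one of the ratio and bytes forms is
> +assumed to be in effect at a time, so writing memory.dirty_bytes would
> +supersede memory.dirty_ratio:
> +
> +# echo 16777216 > /cgroups/0/memory.dirty_bytes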
> +
> 6. Hierarchy support
>
> The memory controller supports a deep hierarchy and hierarchical accounting.
> --
> 1.7.1
>