Message-ID: <20101006133024.GE4195@balbir.in.ibm.com>
Date: Wed, 6 Oct 2010 19:00:24 +0530
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: Greg Thelen <gthelen@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
containers@...ts.osdl.org, Andrea Righi <arighi@...eler.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Subject: Re: [PATCH 08/10] memcg: add cgroupfs interface to memcg dirty limits
* Greg Thelen <gthelen@...gle.com> [2010-10-03 23:58:03]:
> Add cgroupfs interface to memcg dirty page limits:
> Direct write-out is controlled with:
> - memory.dirty_ratio
> - memory.dirty_bytes
>
> Background write-out is controlled with:
> - memory.dirty_background_ratio
> - memory.dirty_background_bytes
>
> Signed-off-by: Andrea Righi <arighi@...eler.com>
> Signed-off-by: Greg Thelen <gthelen@...gle.com>
> ---
The added interface is not uniform with the rest of our write
operations. Does the patch below help? I did a quick compile and
boot test.
Make writes to memcg dirty tunables more uniform
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
Today we support the 'k', 'K', 'm', 'M', 'g' and 'G' suffixes for
general memcg writes. This patch provides the same functionality
for the dirty tunables.
---
mm/memcontrol.c | 47 +++++++++++++++++++++++++++++++++++++----------
1 files changed, 37 insertions(+), 10 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2d45a0a..3c360e6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4323,6 +4323,41 @@ static u64 mem_cgroup_dirty_read(struct cgroup *cgrp, struct cftype *cft)
}
static int
+mem_cgroup_dirty_write_string(struct cgroup *cgrp, struct cftype *cft,
+				const char *buffer)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_cont(cgrp);
+ int type = cft->private;
+ int ret = -EINVAL;
+ unsigned long long val;
+
+ if (cgrp->parent == NULL)
+ return ret;
+
+ switch (type) {
+ case MEM_CGROUP_DIRTY_BYTES:
+ /* This function does all necessary parse...reuse it */
+ ret = res_counter_memparse_write_strategy(buffer, &val);
+ if (ret)
+ break;
+ memcg->dirty_param.dirty_bytes = val;
+ memcg->dirty_param.dirty_ratio = 0;
+ break;
+ case MEM_CGROUP_DIRTY_BACKGROUND_BYTES:
+ ret = res_counter_memparse_write_strategy(buffer, &val);
+ if (ret)
+ break;
+ memcg->dirty_param.dirty_background_bytes = val;
+ memcg->dirty_param.dirty_background_ratio = 0;
+ break;
+ default:
+ BUG();
+ break;
+ }
+ return ret;
+}
+
+static int
mem_cgroup_dirty_write(struct cgroup *cgrp, struct cftype *cft, u64 val)
{
struct mem_cgroup *memcg = mem_cgroup_from_cont(cgrp);
@@ -4338,18 +4373,10 @@ mem_cgroup_dirty_write(struct cgroup *cgrp, struct cftype *cft, u64 val)
memcg->dirty_param.dirty_ratio = val;
memcg->dirty_param.dirty_bytes = 0;
break;
- case MEM_CGROUP_DIRTY_BYTES:
- memcg->dirty_param.dirty_bytes = val;
- memcg->dirty_param.dirty_ratio = 0;
- break;
case MEM_CGROUP_DIRTY_BACKGROUND_RATIO:
memcg->dirty_param.dirty_background_ratio = val;
memcg->dirty_param.dirty_background_bytes = 0;
break;
- case MEM_CGROUP_DIRTY_BACKGROUND_BYTES:
- memcg->dirty_param.dirty_background_bytes = val;
- memcg->dirty_param.dirty_background_ratio = 0;
- break;
default:
BUG();
break;
@@ -4429,7 +4456,7 @@ static struct cftype mem_cgroup_files[] = {
{
.name = "dirty_bytes",
.read_u64 = mem_cgroup_dirty_read,
- .write_u64 = mem_cgroup_dirty_write,
+ .write_string = mem_cgroup_dirty_write_string,
.private = MEM_CGROUP_DIRTY_BYTES,
},
{
@@ -4441,7 +4468,7 @@ static struct cftype mem_cgroup_files[] = {
{
.name = "dirty_background_bytes",
.read_u64 = mem_cgroup_dirty_read,
- .write_u64 = mem_cgroup_dirty_write,
+	.write_string = mem_cgroup_dirty_write_string,
.private = MEM_CGROUP_DIRTY_BACKGROUND_BYTES,
},
};
--
Three Cheers,
Balbir