Message-Id: <20110905101607.cd946a46.nishimura@mxp.nes.nec.co.jp>
Date: Mon, 5 Sep 2011 10:16:07 +0900
From: Daisuke Nishimura <nishimura@....nes.nec.co.jp>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: "Kirill A. Shutemov" <kirill@...temov.name>,
Andrew Morton <akpm@...ux-foundation.org>,
Balbir Singh <bsingharora@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Subject: Re: [PATCH] memcg: drain all stocks for the cgroup before read usage
On Mon, 5 Sep 2011 08:59:13 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
> On Sun, 4 Sep 2011 04:15:33 +0300
> "Kirill A. Shutemov" <kirill@...temov.name> wrote:
>
> > From: "Kirill A. Shutemov" <kirill@...temov.name>
> >
> > Currently, mem_cgroup_usage() for a non-root cgroup returns usage
> > including stocks.
> >
> > Let's drain all stocks before reading the resource counter value. It makes
> > memory{,.memsw}.usage_in_bytes and memory.stat consistent.
> >
> > Signed-off-by: Kirill A. Shutemov <kirill@...temov.name>
>
> Hmm. This seems costly to me.
>
> If a user checks usage_in_bytes in a memcg once per second,
> the kernel will call schedule_work() on every cpu once per second.
> So, IMHO, I don't like this.
>
I agree.
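To illustrate the cost: the usage sitting in the per-cpu stocks can only be
flushed by running work on each cpu that holds a cached charge. A rough
sketch, simplified from mm/memcontrol.c (names abbreviated, not the exact
code):

  /* simplified sketch of the per-cpu stock and its drain path */
  struct memcg_stock_pcp {
          struct mem_cgroup *cached;      /* memcg owning the cached charge */
          unsigned int nr_pages;          /* pre-charged pages not yet accounted */
          struct work_struct work;        /* returns this cpu's stock to the res_counter */
  };
  static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);

  static void drain_all_stock_sketch(struct mem_cgroup *memcg)
  {
          int cpu;

          get_online_cpus();
          for_each_online_cpu(cpu) {
                  struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);

                  /* one work item per cpu that caches charges for this memcg */
                  if (stock->cached == memcg && stock->nr_pages)
                          schedule_work_on(cpu, &stock->work);
          }
          put_online_cpus();
  }

So a drain before every read of usage_in_bytes turns each read into
cross-cpu work.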
We discussed a similar topic in the thread https://lkml.org/lkml/2011/3/18/212,
and we added the following to memory.txt:
---
5.5 usage_in_bytes
For efficiency, like other kernel components, the memory cgroup uses some
optimization to avoid unnecessary cacheline false sharing. usage_in_bytes is
affected by this method and doesn't show the 'exact' value of memory (and swap)
usage; it's a fuzz value for efficient access. (Of course, when necessary, it's
synchronized.)
If you want to know the more exact memory usage, you should use the
RSS+CACHE(+SWAP) value in memory.stat (see 5.2).
---
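If it helps, here is a small userspace sketch of what the documentation
recommends: summing RSS+CACHE(+SWAP) from memory.stat instead of reading
usage_in_bytes. The cgroup path below is only an example (it depends on where
the memory cgroup is mounted), and the swap line only appears when swap
accounting is enabled:

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          /* example path -- adjust to your memory cgroup mount point */
          FILE *f = fopen("/cgroup/memory/your_group/memory.stat", "r");
          char key[64];
          unsigned long long val, usage = 0;

          if (!f) {
                  perror("memory.stat");
                  return 1;
          }
          /* memory.stat lines look like "cache 12345", values in bytes */
          while (fscanf(f, "%63s %llu", key, &val) == 2) {
                  if (!strcmp(key, "rss") || !strcmp(key, "cache") ||
                      !strcmp(key, "swap"))
                          usage += val;
          }
          fclose(f);
          printf("usage (rss+cache+swap): %llu bytes\n", usage);
          return 0;
  }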
Thanks,
Daisuke Nishimura.