Message-ID: <20100821004804.GA11030@localhost>
Date: Sat, 21 Aug 2010 08:48:04 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Michael Rubin <mrubin@...gle.com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"jack@...e.cz" <jack@...e.cz>, "riel@...hat.com" <riel@...hat.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"david@...morbit.com" <david@...morbit.com>,
"npiggin@...nel.dk" <npiggin@...nel.dk>, "hch@....de" <hch@....de>,
"axboe@...nel.dk" <axboe@...nel.dk>
Subject: Re: [PATCH 3/4] writeback: nr_dirtied and nr_entered_writeback in
/proc/vmstat
On Sat, Aug 21, 2010 at 07:51:38AM +0800, Michael Rubin wrote:
> On Fri, Aug 20, 2010 at 3:08 AM, Wu Fengguang <fengguang.wu@...el.com> wrote:
> > How about the names nr_dirty_accumulated and nr_writeback_accumulated?
> > It seems more consistent, for both the interface and code (see below).
> > I'm not really sure though.
>
> Those names don't seem right to me.
> I admit I like "nr_dirtied" and "nr_cleaned"; those seem the most
> easily understood. These numbers also get very big pretty fast, so I
> don't think they're hard to interpret.
That's fine. I like "nr_cleaned".
> >> In order to track the "cleaned" and "dirtied" counts we added two
> >> vm_stat_items. Per memory node stats have been added also. So we can
> >> see per node granularity:
> >>
> >> # cat /sys/devices/system/node/node20/writebackstat
> >> Node 20 pages_writeback: 0 times
> >> Node 20 pages_dirtied: 0 times
> >
> > I'd prefer the name "vmstat" over "writebackstat", and propose to
> > migrate items from /proc/zoneinfo over time. zoneinfo is a terrible
> > interface for scripting.
>
> I like vmstat also. I can do that.
Thank you.
> > Also, are there meaningful usage of per-node writeback stats?
>
> For us yes. We use fake numa nodes to implement cgroup memory isolation.
> This allows us to see what the writeback behaviour is like per cgroup.
That's certainly convenient for you, for now. But it's a special use
case. I wonder if you'll still stick to the fake NUMA scenario two
years from now -- when memcg grows powerful enough. What do we do then?
"Hey, let's rip out these counters, their major consumer has dropped
them..."
For per-job nr_dirtied, I suspect the per-process write_bytes and
cancelled_write_bytes in /proc/self/io will serve you well.
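As a rough sketch (not part of the patch), the per-job figure could be
derived from those two /proc/<pid>/io fields like this; the helper name
io_dirtied is made up for illustration, and the file only exists when
CONFIG_TASK_IO_ACCOUNTING is enabled:

```shell
# Sketch: approximate "bytes this process dirtied" as
# write_bytes minus cancelled_write_bytes from /proc/<pid>/io.
# Field names follow Documentation/filesystems/proc.txt.
io_dirtied() {
    awk -F': ' '
        $1 == "write_bytes"           { w = $2 }
        $1 == "cancelled_write_bytes" { c = $2 }
        END { print w - c }
    ' "$1"
}

# Example: this shell's own counters, if IO accounting is compiled in.
if [ -r /proc/self/io ]; then io_dirtied /proc/self/io; fi
```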
For per-job nr_cleaned, I suspect the per-zone nr_writeback will be
sufficient for debugging purposes (despite being a bit different).
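For what it's worth, pulling per-zone nr_writeback out of /proc/zoneinfo
today takes something like the sketch below -- which also illustrates the
scripting pain mentioned above, since each zone is a multi-line record
introduced by a "Node N, zone NAME" header (the zone_writeback helper is
hypothetical):

```shell
# Sketch: print nr_writeback for each zone listed in a zoneinfo-format file.
zone_writeback() {
    awk '
        /^Node/ {                       # e.g. "Node 0, zone   Normal"
            node = $2; sub(",", "", node)
            zone = $4
        }
        $1 == "nr_writeback" { print "node", node, "zone", zone, "nr_writeback", $2 }
    ' "$1"
}

if [ -r /proc/zoneinfo ]; then zone_writeback /proc/zoneinfo; fi
```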
> > The numbers are naturally per-bdi ones instead. But if we plan to
> > expose them for each bdi, this patch will need to be implemented
> > vastly differently.
>
> Currently I have no plans to do that.
Peter? :)
Thanks,
Fengguang