Message-ID: <xr93lixdv0df.fsf@gthelen.mtv.corp.google.com>
Date: Tue, 07 Jun 2011 13:43:08 -0700
From: Greg Thelen <gthelen@...gle.com>
To: Vivek Goyal <vgoyal@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
containers@...ts.osdl.org, linux-fsdevel@...r.kernel.org,
Andrea Righi <arighi@...eler.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
Minchan Kim <minchan.kim@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Ciju Rajan K <ciju@...ux.vnet.ibm.com>,
David Rientjes <rientjes@...gle.com>,
Wu Fengguang <fengguang.wu@...el.com>,
Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH v8 11/12] writeback: make background writeback cgroup aware

Vivek Goyal <vgoyal@...hat.com> writes:
> On Fri, Jun 03, 2011 at 09:12:17AM -0700, Greg Thelen wrote:
>> When the system is under background dirty memory threshold but a cgroup
>> is over its background dirty memory threshold, then only writeback
>> inodes associated with the over-limit cgroup(s).
>>
>
> [..]
>> -static inline bool over_bground_thresh(void)
>> +static inline bool over_bground_thresh(struct bdi_writeback *wb,
>> + struct writeback_control *wbc)
>> {
>> unsigned long background_thresh, dirty_thresh;
>>
>> global_dirty_limits(&background_thresh, &dirty_thresh);
>>
>> - return (global_page_state(NR_FILE_DIRTY) +
>> - global_page_state(NR_UNSTABLE_NFS) > background_thresh);
>> + if (global_page_state(NR_FILE_DIRTY) +
>> + global_page_state(NR_UNSTABLE_NFS) > background_thresh) {
>> + wbc->for_cgroup = 0;
>> + return true;
>> + }
>> +
>> + wbc->for_cgroup = 1;
>> + wbc->shared_inodes = 1;
>> + return mem_cgroups_over_bground_dirty_thresh();
>> }
>
> Hi Greg,
>
> So all the logic of writeout from mem cgroup works only if system is
> below background limit. The moment we cross background limit, looks
> like we will fall back to existing way of writing inodes?

Correct. If the system is over its background limit then the previous
cgroup-unaware background writeback occurs. I think of the system
limits as those of the root cgroup. If the system is over the global
limit then all cgroups are eligible for writeback. In this situation
the current code does not distinguish between cgroups over or under
their dirty background limits.

Vivek Goyal <vgoyal@...hat.com> writes:
> If yes, then from design point of view it is a little odd that as long
> as we are below background limit, we share the bdi between different
> cgroups. The moment we are above background limit, we fall back to
> algorithm of sharing the disk among individual inodes and forget
> about memory cgroups. Kind of awkward.
>
> This kind of cgroup writeback I think will at least not solve the problem
> for CFQ IO controller, as we fall back to old ways of writing back inodes
> the moment we cross dirty ratio.

It might make more sense to reverse the order of the checks in the
proposed over_bground_thresh(): the new version would first check
whether any memcg is over its limit; only if none are would it check
the global limit. With that ordering, if the system is over its
background limit and some cgroups are also over their limits, the
over-limit cgroups would be written back first, possibly bringing the
system below its global limit. Does this address your concern?
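
Concretely, the reordering might look something like the sketch below.
This is untested and only meant to illustrate the idea; the helpers are
the same ones used in the patch above.

static inline bool over_bground_thresh(struct bdi_writeback *wb,
				       struct writeback_control *wbc)
{
	unsigned long background_thresh, dirty_thresh;

	/*
	 * Per-memcg check first: over-limit cgroups are written back
	 * even when the system as a whole is over its background limit.
	 */
	if (mem_cgroups_over_bground_dirty_thresh()) {
		wbc->for_cgroup = 1;
		wbc->shared_inodes = 1;
		return true;
	}

	global_dirty_limits(&background_thresh, &dirty_thresh);

	/* Otherwise fall back to the global (root cgroup) check. */
	if (global_page_state(NR_FILE_DIRTY) +
	    global_page_state(NR_UNSTABLE_NFS) > background_thresh) {
		wbc->for_cgroup = 0;
		return true;
	}

	return false;
}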

Note: mem_cgroup_balance_dirty_pages() (patch 10/12) will perform
foreground writeback when a memcg is above its dirty limit. This would
give CFQ IO issued by multiple tasks rather than only by the flusher
thread.
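
For context, a very rough sketch of that idea follows. This is not the
patch 10/12 code; memcg_over_fg_dirty_limit() is a placeholder for
whatever per-memcg foreground-limit check the real patch uses, and the
details are illustrative only.

/*
 * Illustrative sketch, not the code from patch 10/12: the task dirtying
 * pages throttles itself and issues writeback for its own memcg, so the
 * IO scheduler sees IO from many tasks, not just the per-bdi flusher.
 */
static void mem_cgroup_balance_dirty_pages_sketch(struct address_space *mapping)
{
	struct backing_dev_info *bdi = mapping->backing_dev_info;
	struct writeback_control wbc = {
		.sync_mode     = WB_SYNC_NONE,
		.nr_to_write   = 1024,	/* write back in modest chunks */
		.for_cgroup    = 1,	/* target this memcg's inodes */
		.shared_inodes = 1,
	};

	/* Placeholder check: is current's memcg over its dirty limit? */
	while (memcg_over_fg_dirty_limit(current)) {
		writeback_inodes_wb(&bdi->wb, &wbc);
		congestion_wait(BLK_RW_ASYNC, HZ / 10);
	}
}
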
> Also have you done any benchmarking regarding what's the overhead of
> going through say thousands of inodes to find the inode which is eligible
> for writeback from a cgroup? I think Dave Chinner had raised this concern
> in the past.
>
> Thanks
> Vivek

I will collect some performance data measuring the cost of scanning.