Message-ID: <BANLkTi=WaXELkPYt_m=zXenMjAxJXC0hyMxDew43Z+jvv2VTxw@mail.gmail.com>
Date: Thu, 9 Jun 2011 15:21:17 -0700
From: Greg Thelen <gthelen@...gle.com>
To: Vivek Goyal <vgoyal@...hat.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
containers@...ts.osdl.org, linux-fsdevel@...r.kernel.org,
Andrea Righi <arighi@...eler.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
Minchan Kim <minchan.kim@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Ciju Rajan K <ciju@...ux.vnet.ibm.com>,
David Rientjes <rientjes@...gle.com>,
Wu Fengguang <fengguang.wu@...el.com>,
Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH v8 11/12] writeback: make background writeback cgroup aware
On Thu, Jun 9, 2011 at 2:26 PM, Vivek Goyal <vgoyal@...hat.com> wrote:
> On Thu, Jun 09, 2011 at 10:55:40AM -0700, Greg Thelen wrote:
>> On Wed, Jun 8, 2011 at 1:39 PM, Vivek Goyal <vgoyal@...hat.com> wrote:
>> > On Tue, Jun 07, 2011 at 09:02:21PM -0700, Greg Thelen wrote:
>> >
>> > [..]
>> >> > As far as I can say, you should not place programs onto ROOT cgroups if you need
>> >> > performance isolation.
>> >>
>> >> Agreed.
>> >>
>> >> > From the code, I think if the system hits dirty_ratio, the "1" bit of the bitmap should be
>> >> > set and background writeback can work for the ROOT cgroup seamlessly.
>> >> >
>> >> > Thanks,
>> >> > -Kame
>> >>
>> >> Not quite. The proposed patches do not set the "1" bit (css_id of
>> >> root is 1). mem_cgroup_balance_dirty_pages() (from patch 10/12)
>> >> introduces the following balancing loop:
>> >> + /* balance entire ancestry of current's mem. */
>> >> + for (; mem_cgroup_has_dirty_limit(mem); mem = parent_mem_cgroup(mem)) {
>> >>
>> >> The loop terminates when mem_cgroup_has_dirty_limit() is called for
>> >> the root cgroup. The bitmap is set in the body of the loop. So the
>> >> root cgroup's bit (bit 1) will never be set in the bitmap. However, I
>> >> think the effect is the same. The proposed changes in this patch
>> >> (11/12) have background writeback first check whether the system is
>> >> over its limit and, if so, write b_dirty inodes from any cgroup. This
>> >> means that a small system background limit combined with an over-{fg
>> >> or bg}-limit cgroup could cause other cgroups that are not over their
>> >> limits to have their inodes written back. In a system-over-limit
>> >> situation, normal system-wide bdi writeback is used (writing inodes in
>> >> b_dirty order). For those who want isolation, a simple rule to avoid
>> >> this is to ensure that the sum of all cgroup background_limits is
>> >> less than the system background limit.
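(Restating the shape of that loop as a minimal sketch; the body is heavily
abbreviated from patch 10/12 and the limit checks below are placeholders,
not the actual code:

	/* walk from current's memcg up through its ancestors */
	for (; mem_cgroup_has_dirty_limit(mem); mem = parent_mem_cgroup(mem)) {
		/* ... compute this memcg's dirty usage and limits ... */
		if (dirty_usage > background_limit)	/* placeholder check */
			set_bit(css_id(&mem->css), over_bground_dirty_thresh);
		/* ... throttle if over the foreground limit ... */
	}

Because mem_cgroup_has_dirty_limit() returns false for the root cgroup,
the loop body, and therefore the set_bit(), never runs for css_id 1.)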
>> >
>> > Ok, we seem to be mixing multiple things.
>> >
>> > - First of all, I thought running apps in the root group is a very valid
>> > use case. Generally, by default we run everything in the root group, and
>> > once somebody notices that an application or group of applications
>> > is a memory hog, it can be moved out into a cgroup of its own with
>> > upper limits.
>> >
>> > - Secondly, the root starvation issue is not present as long as we fall
>> > back to the normal way of writing inodes once we have crossed the dirty
>> > limit. But you had suggested that we move the cgroup-based writeout
>> > above so that we always use the same scheme for writeout, and that
>> > will potentially have the root starvation issue.
>>
>> To reduce the risk of breaking system writeback (by potentially
>> starving root inodes), my preference is to retain this patch's
>> original ordering (first check and write towards the system limits;
>> only if under the system limits, write per-cgroup).
>>
>> > - If we don't move it up, then at least it will not work for the CFQ IO
>> > controller.
>>
>> As originally proposed, over_bground_thresh() would check the system
>> background limit, and if over that limit, write b_dirty inodes until
>> under the system limit. Then over_bground_thresh() checks the cgroup
>> background limits, and if over limit(s), writes over-limit-cgroup
>> inodes until the cgroups are under their background limits.
>>
>> How does the order of the checks in over_bground_thresh() affect CFQ
>> IO?
>
> If you are over the background limit, you will select inodes independent
> of the cgroup they belong to. So it might happen that for a long time you
> select inodes only from a low-prio IO cgroup, and that will result in
> pages being written from the low-prio cgroup (as opposed to the high-prio
> cgroup), and the low-prio group gets to finish its writes earlier. This
> is just the reverse of what we wanted from the IO controller.
>
> So the CFQ IO controller really can't do anything here until the inode
> writeback logic is cgroup aware in a way that we do round robin among
> dirty cgroups so that most of the time these groups have some IO to
> do at the device level.
Thanks for the explanation.
My thinking is that this patch's original proposal (first checking
system limits before cgroup limits) is better for CFQ than the reversal
discussed earlier in this thread. By walking the system's inode list,
CFQ would be getting inodes in dirtied_when order from a mix of
cgroups rather than just inodes from a particular cgroup. This patch
series introduces a bitmap of cgroups needing writeback, but the inodes
for multiple cgroups are still kept in a single bdi b_dirty list.
move_expired_inodes() could be changed to examine the over-limit
cgroup bitmap (over_bground_dirty_thresh) and the CFQ priorities of the
cgroups found there to develop an inode sort strategy that provides CFQ
with the right set of inodes, roughly as sketched below. Though I would
prefer to defer that to a later patch series.
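Just as a strawman (not part of this series; inode_to_dirtying_css_id() is
a made-up helper standing in for whatever inode-to-memcg mapping we end up
with, and the per-sb grouping the real move_expired_inodes() does is
ignored for brevity):

	static void move_expired_inodes(struct list_head *delaying_queue,
					struct list_head *dispatch_queue,
					unsigned long *older_than_this)
	{
		struct inode *inode;
		LIST_HEAD(preferred);

		/* b_dirty is newest-first, so expired inodes sit at the tail */
		while (!list_empty(delaying_queue)) {
			inode = list_entry(delaying_queue->prev,
					   struct inode, i_wb_list);
			if (older_than_this &&
			    inode_dirtied_after(inode, *older_than_this))
				break;
			/*
			 * Made-up helper: css_id of the memcg that dirtied
			 * (most of) this inode's pages.
			 */
			if (test_bit(inode_to_dirtying_css_id(inode),
				     over_bground_dirty_thresh))
				/* over-limit cgroup: keep its inodes grouped */
				list_move(&inode->i_wb_list, &preferred);
			else
				list_move(&inode->i_wb_list, dispatch_queue);
		}
		/*
		 * Splice the over-limit cgroups' inodes in as well; the exact
		 * b_io ordering (and the per-sb sort) is hand-waved here.
		 */
		list_splice(&preferred, dispatch_queue);
	}

CFQ priority could be folded into the same sort, but that is exactly the
kind of thing I would rather prototype in a follow-up series.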
>> Are you referring to the recently proposed block throttle patches,
>> which (AFAIK) throttle the rate at which a cgroup can produce dirty
>> pages as a way to approximate the rate that async dirty pages will be
>> written to disk?
>
> No, this is not related to throttling of async writes.
>
> Thanks
> Vivek