Message-ID: <20110419171723.GM31712@redhat.com>
Date: Tue, 19 Apr 2011 13:17:23 -0400
From: Vivek Goyal <vgoyal@...hat.com>
To: Dave Chinner <david@...morbit.com>
Cc: Jan Kara <jack@...e.cz>, Greg Thelen <gthelen@...gle.com>,
James Bottomley <James.Bottomley@...senpartnership.com>,
lsf@...ts.linux-foundation.org, linux-fsdevel@...r.kernel.org,
linux kernel mailing list <linux-kernel@...r.kernel.org>
Subject: Re: cgroup IO throttling and filesystem ordered mode (Was: Re: [Lsf]
IO less throttling and cgroup aware writeback (Was: Re: Preliminary Agenda
and Activities for LSF))
On Tue, Apr 19, 2011 at 10:30:22AM -0400, Vivek Goyal wrote:
[..]
> >
> > In XFS, you could probably do this at the transaction reservation
> > stage where log space is reserved. We know everything about the
> > transaction at this point in time, and we throttle here already when
> > the journal is full. Adding cgroup transaction limits to this point
> > would be the place to do it, but the control parameter for it would
> > be very XFS specific (i.e. number of transactions/s). Concurrency is
> > not an issue - the XFS transaction subsystem is only limited in
> > concurrency by the space available in the journal for reservations
> > (hundreds to thousands of concurrent transactions).
>
> Instead of transactions per second, can we implement some kind of upper
> limit on pending transactions per cgroup? That limit does not have
> to be user tunable to begin with. The effective transactions/sec rate
> will automatically be determined by the IO throttling rate of the cgroup
> at the end nodes.
>
> I think what we effectively need is the notion of parallel
> transactions, so that transactions of one cgroup can make progress
> independent of transactions of other cgroups. So if a process does
> an fsync and it is throttled, then it should block transactions of
> only that cgroup and not of other cgroups.
>
> You mentioned that concurrency is not an issue in XFS and that hundreds
> to thousands of concurrent transactions can progress depending on the
> log space available. If that's the case, I think to begin with we might
> not have to do anything at all. Processes can still get blocked, but as
> long as we have enough log space, this might not be a frequent event.
> I will do some testing with XFS and see whether I can livelock the
> system with very low IO limits.
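
The kind of per-cgroup cap on pending transactions discussed above could
look very roughly like the sketch below. All of the names here
(tg_trans_limit, tg_trans_throttle, etc.) are made up for illustration;
this is not existing XFS or blk-cgroup code, just one possible shape for
a hook at journal reservation time.

#include <linux/atomic.h>
#include <linux/wait.h>

/* Hypothetical per-cgroup transaction accounting; none of these names
 * exist in the kernel today. */
struct tg_trans_limit {
	atomic_t		nr_pending;	/* transactions this cgroup has open */
	int			max_pending;	/* cap; a fixed default to begin with */
	wait_queue_head_t	wait;
};

/*
 * Would be called where the filesystem reserves journal space (e.g.
 * alongside xfs_trans_reserve()).  Only tasks of a cgroup that is
 * already at its cap wait here; transactions of other cgroups keep
 * flowing.
 */
static int tg_trans_throttle(struct tg_trans_limit *tg)
{
	return wait_event_killable(tg->wait,
			atomic_add_unless(&tg->nr_pending, 1, tg->max_pending));
}

/* Would be called on transaction commit/cancel to release the slot. */
static void tg_trans_done(struct tg_trans_limit *tg)
{
	atomic_dec(&tg->nr_pending);
	wake_up(&tg->wait);
}

The effective transactions/sec of a group would then still be set by the
block-layer throttling limits; the cap only keeps one throttled group
from tying up the journal for everybody else.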
Wow, XFS seems to be doing pretty well here. I created a cgroup with a
1 byte/sec limit, wrote a few bytes to a file and write-quit it in vim.
That led to an fsync, and the process got blocked. From a different
cgroup, in the same directory, I can still do all the other regular
operations: ls, opening a new file, editing it, etc.
ext4 will lock up immediately in the same scenario. So concurrent
transactions do seem to work in XFS.
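
For reference, the test can be reproduced with something roughly like
the program below. The cgroup v1 blkio paths and the 8:0 device number
are assumptions about the setup; the throttled "slow" group has to be
created beforehand and the blkio controller mounted.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a string into a cgroup control file; returns 0 on success. */
static int write_str(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = write(fd, val, strlen(val));
	close(fd);
	return n == (ssize_t)strlen(val) ? 0 : -1;
}

int main(void)
{
	char pid[32];
	int fd;

	/* Throttle writes of the "slow" group to 1 byte/sec on device 8:0
	 * (device number is setup specific). */
	if (write_str("/sys/fs/cgroup/blkio/slow/blkio.throttle.write_bps_device",
		      "8:0 1"))
		perror("set write_bps_device");

	/* Move this process into the throttled group. */
	snprintf(pid, sizeof(pid), "%d", getpid());
	if (write_str("/sys/fs/cgroup/blkio/slow/tasks", pid))
		perror("join cgroup");

	/* A few bytes plus fsync, like write-quitting a small file in vim;
	 * this is where the throttled process is expected to block. */
	fd = open("/mnt/test/file", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, "hello\n", 6) != 6)
		perror("write");
	fsync(fd);	/* blocks for a long time under the 1 byte/sec limit */
	close(fd);
	return 0;
}

Running this from the throttled group while doing ls and editing other
files in the same directory from an unthrottled shell is the scenario
described above.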
Thanks
Vivek