Message-ID: <20130503192656.GD6062@redhat.com>
Date: Fri, 3 May 2013 15:26:56 -0400
From: Vivek Goyal <vgoyal@...hat.com>
To: Tejun Heo <tj@...nel.org>
Cc: Jens Axboe <axboe@...nel.dk>, lkml <linux-kernel@...r.kernel.org>,
Li Zefan <lizefan@...wei.com>,
containers@...ts.linux-foundation.org,
Cgroups <cgroups@...r.kernel.org>
Subject: Re: [PATCHSET] blk-throttle: implement proper hierarchy support
On Fri, May 03, 2013 at 12:14:18PM -0700, Tejun Heo wrote:
> On Fri, May 03, 2013 at 03:08:23PM -0400, Vivek Goyal wrote:
> > T1 T2 T3 T4 T5 T6 T7
> > parent: b1 b2 b3 b4 b5
> > child: b1 b2 b3 b4 b5
> >
> >
> > So continuity breaks down because the application is waiting for the
> > previous IO to finish. This forces expiry of the existing time slices and
> > a new time slice start, both in child and parent, and the penalty keeps
> > on increasing.
>
> It's a problem even in flat mode, as the "child" above can easily be
> just a process which is throttling itself, and it won't be able to get
> the configured bandwidth due to the scheduling bubbles introduced
> whenever a new slice is started. Shouldn't be too difficult to get rid
> of, right?
The key thing here is when to start a new slice. Generally, when an IO has
been dispatched from a group, we do not expire the slice immediately. We kind
of give the group a grace period of throtl_slice (100ms). If the next IO does
not arrive within that duration, we start a fresh slice when it does arrive.
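
To make that concrete, here is a simplified userspace model of the decision
(this is not the actual blk-throttle code; the names and the exact extension
rule are made up, only the grace-period idea is the point):

/*
 * Toy model of the slice logic described above.  Times are in
 * milliseconds; THROTL_SLICE_MS mirrors the 100ms grace period.
 */
#include <stdbool.h>
#include <stdio.h>

#define THROTL_SLICE_MS 100UL

struct slice {
        unsigned long start;            /* when the slice began */
        unsigned long end;              /* start + grace period */
};

static void start_new_slice(struct slice *s, unsigned long now)
{
        s->start = now;
        s->end = now + THROTL_SLICE_MS;
}

/* Has the grace period run out by time 'now'? */
static bool slice_used_up(const struct slice *s, unsigned long now)
{
        return !(now >= s->start && now <= s->end);
}

static void on_io_arrival(struct slice *s, unsigned long now)
{
        if (slice_used_up(s, now)) {
                /* IO came after the grace period: start a fresh slice. */
                start_new_slice(s, now);
                printf("t=%lu: fresh slice [%lu, %lu]\n",
                       now, s->start, s->end);
        } else {
                /* Still within the slice: just push the end further out. */
                s->end = now + THROTL_SLICE_MS;
                printf("t=%lu: slice extended to %lu\n", now, s->end);
        }
}

int main(void)
{
        struct slice s;

        start_new_slice(&s, 0);
        on_io_arrival(&s, 50);          /* within 100ms: slice extended */
        on_io_arrival(&s, 300);         /* 250ms gap: fresh slice starts */
        return 0;
}
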
I think a similar problem can happen if there are two stacked devices, both
doing throttling, and the delays between two IOs are big enough to force
expiry of the slice on each device.
At least for the hierarchy case, we should be able to start a fresh time
slice when the child transfers a bio to the parent. I will write a patch and
do some experiments.
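
In terms of the toy model above, the intent would be roughly this (not the
actual patch, just a sketch reusing struct slice and the helpers from the
model):

/*
 * When the child hands a bio to the parent, restart the parent's slice
 * if the wait bubble has let it go stale, instead of charging the idle
 * time against the parent.
 */
static void child_transfers_bio(struct slice *parent, unsigned long now)
{
        if (slice_used_up(parent, now))
                start_new_slice(parent, now);
}
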
Thanks
Vivek