Date: Wed, 4 Apr 2012 12:50:48 -0400
From: Vivek Goyal <vgoyal@...hat.com>
To: Tao Ma <tm@....ma>
Cc: Shaohua Li <shli@...nel.org>, Tejun Heo <tj@...nel.org>,
axboe@...nel.dk, ctalbott@...gle.com, rni@...gle.com,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
containers@...ts.linux-foundation.org
Subject: Re: IOPS based scheduler (Was: Re: [PATCH 18/21] blkcg: move blkio_group_conf->weight to cfq)
On Thu, Apr 05, 2012 at 12:45:05AM +0800, Tao Ma wrote:
[..]
> > In iops_mode(), expire each cfqq after dispatch of 1 or bunch of requests
> > and you should get the same behavior (with slice_idle=0 and group_idle=0).
> > So why write a new scheduler.
> really? How could we config cfq to work like this? Or you mean we can
> change the code for it?
You can just put a few lines of code to expire the queue after 1-2 requests
have been dispatched from it. Then run your workload with slice_idle=0
and group_idle=0 and see what happens.
I don't even know what your workload is.
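[Illustrative only: a rough, untested sketch of the kind of tweak described above, against the cfq-iosched.c of that era. The iops_mode() helper and cfq_slice_expired() exist in CFQ; the exact placement and the dispatch threshold of 2 are assumptions for the experiment, not a proposed patch.]

```c
/* In cfq_dispatch_requests(), after a request has been dispatched
 * from cfqq: when running in iops mode, expire the queue after a
 * couple of dispatches so queues round-robin on request count
 * rather than on time slices.
 *
 * NOTE: hypothetical fragment for experimentation; threshold of 2
 * is arbitrary. Run with slice_idle=0 and group_idle=0.
 */
if (iops_mode(cfqd) && cfqq->slice_dispatch >= 2)
	cfq_slice_expired(cfqd, 0);
```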
> >
> > Only thing is that with above, current code will provide iops fairness only
> > for groups. We should be able to tweak queue scheduling to support iops
> > fairness also.
> OK, as I have said in another e-mail, my other concern is the
> complexity. It will make cfq much too complicated. I just checked the
> source code of Shaohua's original patch: the fiops scheduler is only ~700
> lines, so with cgroup support added it would be ~1000 lines, I guess.
> Currently cfq-iosched.c is around ~4000 lines even after Tejun's cleanup
> of io context...
I think a large chunk of that iops scheduler code will be borrowed from
CFQ code. All the cgroup logic, queue creation logic, group scheduling
logic etc. And that's the reason I was still exploring the possibility
of having common code base.
Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/