Message-ID: <20080515122156.GA11600@gandalf.sssup.it>
Date: Thu, 15 May 2008 14:21:56 +0200
From: Fabio Checconi <fchecconi@...il.com>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: Matthew <jackdachef@...il.com>,
Daniel J Blueman <daniel.blueman@...il.com>,
Kasper Sandberg <lkml@...anurb.dk>,
Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: performance "regression" in cfq compared to anticipatory, deadline and noop
> From: Jens Axboe <jens.axboe@...cle.com>
> Date: Thu, May 15, 2008 09:01:28AM +0200
>
> I don't think it's 2.6.25 vs 2.6.26-rc2, I can still reproduce some
> request size offsets with the patch. So still fumbling around with this,
> I'll be sending out another test patch when I'm confident it's solved
> the size issue.
>
IMO an interesting thing is how/why anticipatory doesn't show the
issue.  The device is not put into ANTIC_WAIT_NEXT unless a dispatch
returned no requests while the queue was not empty.  This seems to be
enough in the reported workloads.

I don't think this behavior is the correct one (it is still racy
WRT merges after breaking anticipation), but it should make things
a little bit better.  I fear that a complete solution would not
involve only the scheduler.
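
Just to make the condition explicit, here is a toy user-space model of
it (made-up names, obviously not the real as-iosched.c code, only the
idea that the wait state is entered after an empty dispatch round while
requests were still queued):

	/* Toy model of the behaviour described above. */
	#include <stdbool.h>
	#include <stdio.h>

	struct toy_as {
		int queued;		/* requests held by the scheduler */
		int in_flight;		/* requests sent to the device */
		bool empty_dispatch;	/* dispatch returned nothing with queued > 0 */
		bool wait_next;		/* models ANTIC_WAIT_NEXT */
	};

	/* Block layer asks for work; hold_back models anticipation. */
	static int toy_dispatch(struct toy_as *as, bool hold_back)
	{
		if (as->queued > 0 && !hold_back) {
			as->queued--;
			as->in_flight++;
			return 1;
		}
		if (as->queued > 0)
			as->empty_dispatch = true;
		return 0;
	}

	/* A request completed: idle only after an empty dispatch round. */
	static void toy_completed(struct toy_as *as)
	{
		as->in_flight--;
		as->wait_next = as->empty_dispatch;
		as->empty_dispatch = false;
	}

	int main(void)
	{
		struct toy_as as = { .queued = 2 };

		toy_dispatch(&as, false);	/* request goes to the device */
		toy_dispatch(&as, true);	/* held back: empty dispatch */
		toy_completed(&as);
		printf("wait_next after empty dispatch: %d\n", as.wait_next);
		return 0;
	}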
Introducing the very same behavior in cfq (i.e., starting to idle
only if there was a dispatch round while the last request was being
served) does not seem to be so easy, but an approximated version can
be introduced quite easily.  The patch below should do that,
rescheduling the dispatch only when necessary; it is not tested at
all, and is posted just for discussion.
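
In short, the lifecycle of the new flag (a simplified sketch of the
hunks below, nothing extra to apply):

	/*
	 * cfq_select_queue():
	 *	keeping the active queue because it is idling
	 *		-> cfq_mark_cfqq_dispatched(cfqq);
	 *
	 * cfq_rq_enqueued():	the awaited request arrives
	 *	if (cfq_cfqq_wait_request(cfqq)) {
	 *		del_timer(&cfqd->idle_slice_timer);
	 *		if (cfq_cfqq_dispatched(cfqq)) {
	 *			cfq_clear_cfqq_dispatched(cfqq);
	 *			cfq_schedule_dispatch(cfqd);
	 *		}
	 *	}
	 *
	 * __cfq_set_active_queue():
	 *	cfq_clear_cfqq_dispatched(cfqq);	reset on new slice
	 */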
---
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index b399c62..41f1e0e 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -169,6 +169,7 @@ enum cfqq_state_flags {
 	CFQ_CFQQ_FLAG_queue_new,	/* queue never been serviced */
 	CFQ_CFQQ_FLAG_slice_new,	/* no requests dispatched in slice */
 	CFQ_CFQQ_FLAG_sync,		/* synchronous queue */
+	CFQ_CFQQ_FLAG_dispatched,	/* empty dispatch while idling */
 };
 
 #define CFQ_CFQQ_FNS(name)						\
@@ -196,6 +197,7 @@ CFQ_CFQQ_FNS(prio_changed);
 CFQ_CFQQ_FNS(queue_new);
 CFQ_CFQQ_FNS(slice_new);
 CFQ_CFQQ_FNS(sync);
+CFQ_CFQQ_FNS(dispatched);
 #undef CFQ_CFQQ_FNS
 
 static void cfq_dispatch_insert(struct request_queue *, struct request *);
@@ -749,6 +751,7 @@ static void __cfq_set_active_queue(struct cfq_data *cfqd,
 		cfqq->slice_end = 0;
 		cfq_clear_cfqq_must_alloc_slice(cfqq);
 		cfq_clear_cfqq_fifo_expire(cfqq);
+		cfq_clear_cfqq_dispatched(cfqq);
 		cfq_mark_cfqq_slice_new(cfqq);
 		cfq_clear_cfqq_queue_new(cfqq);
 	}
@@ -978,6 +981,7 @@ static struct cfq_queue *cfq_select_queue(struct cfq_data *cfqd)
 	 */
 	if (timer_pending(&cfqd->idle_slice_timer) ||
 	    (cfqq->dispatched && cfq_cfqq_idle_window(cfqq))) {
+		cfq_mark_cfqq_dispatched(cfqq);
 		cfqq = NULL;
 		goto keep_queue;
 	}
@@ -1784,7 +1788,10 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 		if (cfq_cfqq_wait_request(cfqq)) {
 			cfq_mark_cfqq_must_dispatch(cfqq);
 			del_timer(&cfqd->idle_slice_timer);
-			blk_start_queueing(cfqd->queue);
+			if (cfq_cfqq_dispatched(cfqq)) {
+				cfq_clear_cfqq_dispatched(cfqq);
+				cfq_schedule_dispatch(cfqd);
+			}
 		}
 	} else if (cfq_should_preempt(cfqd, cfqq, rq)) {
 		/*
--