Message-ID: <20091007175112.GI8703@kernel.dk>
Date: Wed, 7 Oct 2009 19:51:12 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: Corrado Zoccolo <czoccolo@...il.com>
Cc: Linux-Kernel <linux-kernel@...r.kernel.org>,
Jeff Moyer <jmoyer@...hat.com>
Subject: Re: [PATCH] cfq-iosched: avoid slice overrun when idling
On Wed, Oct 07 2009, Corrado Zoccolo wrote:
> Idle window for a queue is reduced when the queue is about to finish
> its slice.
>
> Signed-off-by: Corrado Zoccolo <czoccolo@...il.com>
> ---
> block/cfq-iosched.c | 4 +++-
> 1 files changed, 3 insertions(+), 1 deletions(-)
>
> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
> index 4ab33d8..55bb8ca 100644
> --- a/block/cfq-iosched.c
> +++ b/block/cfq-iosched.c
> @@ -1105,8 +1105,10 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd)
> * we don't want to idle for seeks, but we do want to allow
> * fair distribution of slice time for a process doing back-to-back
> * seeks. so allow a little bit of time for him to submit a new rq
> + * but avoid overrunning its timeslice
> */
> - sl = cfqd->cfq_slice_idle;
> + sl = min_t(unsigned long, cfqd->cfq_slice_idle,
> + cfqq->slice_end - jiffies);
> if (sample_valid(cic->seek_samples) && CIC_SEEKY(cic))
> sl = min(sl, msecs_to_jiffies(CFQ_MIN_TT));
This was actually done this way on purpose, since shorter idle periods
more often don't succeed. So the logic was to rather overrun the slice
slightly than wait briefly and just miss the incoming IO.
Of course that will overrun the slice even more, but so will the above,
since the queue still wants to do IO within that shortened window too.
So I think we should either leave it as-is, OR simply not arm the idle
timer when it has less than slice_idle time left and immediately select
a new queue.
--
Jens Axboe
--