Message-ID: <20100301081524.GA28563@sli10-desk.sh.intel.com>
Date: Mon, 1 Mar 2010 16:15:24 +0800
From: Shaohua Li <shaohua.li@...el.com>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"czoccolo@...il.com" <czoccolo@...il.com>,
"vgoyal@...hat.com" <vgoyal@...hat.com>,
"jmoyer@...hat.com" <jmoyer@...hat.com>,
"guijianfeng@...fujitsu.com" <guijianfeng@...fujitsu.com>
Subject: Re: [PATCH] cfq-iosched: quantum check tweak --resend
On Mon, Mar 01, 2010 at 04:02:34PM +0800, Jens Axboe wrote:
> On Mon, Mar 01 2010, Shaohua Li wrote:
> > Currently a queue can only dispatch up to 4 requests if there are other queues.
> > This isn't optimal; the device can handle more requests, for example, AHCI can
> > handle 31 requests. I understand the limit is there for fairness, but we could
> > do a tweak: if the queue still has a lot of its slice left, it seems we could
> > ignore the limit. A test shows this boosts my workload (two threads doing
> > randread on an SSD) from 78MB/s to 100MB/s.
> > Thanks to Corrado and Vivek for their suggestions on the patch.
>
> As mentioned before, I think we definitely want to ensure that we drive
> the full queue depth whenever possible. I think your patch is a bit
> dangerous, though. The problematic workload here is a buffered write,
> interleaved with the occasional sync reader. If the sync reader has to
> endure 32 requests every time, latency rises dramatically for him.
The patch still maintains a hard limit on dispatched requests. For an async
queue, the limit is cfq_slice_async/cfq_slice_idle = 5. For a sync queue, the
limit is 8. And we only dispatch that many requests at the beginning of a
slice. For the workload you mention here, we only dispatch 1 extra request.
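
To make the numbers concrete, here is a rough user-space sketch of the kind of
check being described. The helper name max_dispatch_allowed, the tunable values,
and the "more than half the slice left" test are assumptions for illustration
only, not the actual cfq-iosched patch:

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the default cfq tunables, in milliseconds (assumed values). */
#define CFQ_QUANTUM       4   /* normal per-queue dispatch limit            */
#define CFQ_SLICE_SYNC  100   /* default slice length of a sync queue       */
#define CFQ_SLICE_ASYNC  40   /* default slice length of an async queue     */
#define CFQ_SLICE_IDLE    8   /* idle window charged per request            */

/*
 * Sketch of the idea: near the start of a slice, let the queue dispatch
 * beyond cfq_quantum, but keep a hard cap so a sync reader never waits
 * behind a full 31-deep burst. The caps follow the figures in the mail:
 * cfq_slice_async/cfq_slice_idle = 5 for async, 8 for sync.
 */
static unsigned int max_dispatch_allowed(bool sync, unsigned int slice_left)
{
	unsigned int slice = sync ? CFQ_SLICE_SYNC : CFQ_SLICE_ASYNC;
	unsigned int cap = sync ? 8 : CFQ_SLICE_ASYNC / CFQ_SLICE_IDLE;

	/* "a lot of slice left" modelled here as more than half (assumption) */
	if (slice_left * 2 >= slice)
		return cap;

	return CFQ_QUANTUM;        /* otherwise fall back to the old limit */
}

int main(void)
{
	printf("async, fresh slice: %u\n", max_dispatch_allowed(false, CFQ_SLICE_ASYNC));
	printf("sync,  fresh slice: %u\n", max_dispatch_allowed(true, CFQ_SLICE_SYNC));
	printf("sync,  nearly done: %u\n", max_dispatch_allowed(true, 10));
	return 0;
}

With these assumed defaults the async cap works out to 5 and the sync cap to 8,
matching the limits quoted above.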
Thanks,
Shaohua