Message-ID: <4e5e476b1001111109l34f1398dha2f369cdd525596d@mail.gmail.com>
Date: Mon, 11 Jan 2010 20:09:37 +0100
From: Corrado Zoccolo <czoccolo@...il.com>
To: Vivek Goyal <vgoyal@...hat.com>
Cc: Jeff Garzik <jeff@...zik.org>, Jens Axboe <jens.axboe@...cle.com>,
Linux-Kernel <linux-kernel@...r.kernel.org>,
Jeff Moyer <jmoyer@...hat.com>,
Shaohua Li <shaohua.li@...el.com>,
Gui Jianfeng <guijianfeng@...fujitsu.com>
Subject: Re: [PATCH] cfq-iosched: NCQ SSDs do not need read queue merging
On Mon, Jan 11, 2010 at 6:11 PM, Vivek Goyal <vgoyal@...hat.com> wrote:
> On Mon, Jan 11, 2010 at 06:00:51PM +0100, Corrado Zoccolo wrote:
>> On Mon, Jan 11, 2010 at 5:44 PM, Vivek Goyal <vgoyal@...hat.com> wrote:
>> > On Mon, Jan 11, 2010 at 03:53:00PM +0100, Corrado Zoccolo wrote:
>> >> On Mon, Jan 11, 2010 at 2:18 PM, Jeff Garzik <jeff@...zik.org> wrote:
>> >> > On 01/11/2010 08:13 AM, Jens Axboe wrote:
>> >> >>
>> >> >> On Mon, Jan 11 2010, Corrado Zoccolo wrote:
>> >> >>>
>> >> >>> On Mon, Jan 11, 2010 at 12:25 PM, Jeff Garzik <jeff@...zik.org> wrote:
>> >> >>>>
>> >> >>>> On 01/10/2010 04:04 PM, Corrado Zoccolo wrote:
>> >> >>>>>
>> >> >>>>> The performance of NCQ SSDs is not affected by the distance
>> >> >>>>> between read requests, so there is no point in paying the
>> >> >>>>> overhead of merging such queues.
>> >> >>>>>
>> >> >>>>> Non-NCQ SSDs showed regressions in some special cases, so
>> >> >>>>> they are excluded by this patch.
>> >> >>>>>
>> >> >>>>> This patch intentionally doesn't affect writes, so it
>> >> >>>>> changes the queued[] field to be indexed by READ/WRITE
>> >> >>>>> instead of SYNC/ASYNC, and only computes proximity for
>> >> >>>>> queues with WRITE requests.
>> >> >>>>>
>> >> >>>>> Signed-off-by: Corrado Zoccolo <czoccolo@...il.com>
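For reference, a rough sketch of the idea (not the actual diff; it
assumes the current cfq-iosched.c names such as cfq_prio_tree_add(),
cfqd->hw_tag, blk_queue_nonrot() and cfqq->queued[]): with queued[]
indexed by rq_data_dir(), the prio tree used to find close queues only
needs entries for queues that have WRITE requests pending.

static void cfq_prio_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq)
{
	/*
	 * On a non-rotational device that does queueing (hw_tag set),
	 * read service time does not depend on request distance, so
	 * skip the proximity bookkeeping unless this queue has WRITE
	 * requests that could still benefit from merging.
	 */
	if (blk_queue_nonrot(cfqd->queue) && cfqd->hw_tag &&
	    !cfqq->queued[WRITE])
		return;

	/* ... otherwise insert cfqq into the prio tree keyed by sector ... */
}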
>> >> >>>>
>> >> >>>> That's not really true. Overhead always increases as the
>> >> >>>> total number of ATA commands issued increases.
>> >> >>>
>> >> >>> Jeff Moyer tested the patch on the workload that benefits most
>> >> >>> from queue merging, and found that the patch improved
>> >> >>> performance. So removing the CPU overhead helps much more than
>> >> >>> the marginal gain given by merging on this hardware.
>> >> >>
>> >> >> It's not always going to be true. On SATA the command overhead
>> >> >> is fairly low, but on other hardware that may not be the case.
>> >> >> Unless you are CPU bound by your IO device, merging will always
>> >> >> be beneficial. I'm a little behind on emails after my vacation;
>> >> >> Jeff, what numbers did you generate and on what hardware?
>> >> >
>> >> > ...and on what workload? "The workload that benefits most from
>> >> > queue merging" is highly subjective, and likely does not cover
>> >> > most workloads SSDs will see in the field.
>> >> Hi Jeff,
>> >> exactly.
>> >> The workloads that benefit from queue merging are the ones in which
>> >> a sequential read is actually split up and carried out by different
>> >> processes in different I/O contexts, each issuing strided requests.
>> >> This is clearly not the best way of doing sequential access (I
>> >> would happily declare those programs buggy).
>> >> CFQ has code that merges queues in this case. I'm disabling the
>> >> READ part for NCQ SSDs since, as Jeff measured, the code overhead
>> >> outweighs the gain from merging (if any).
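As a concrete illustration (a hypothetical userspace sketch, not
Jeff's actual read-test2): two processes that together cover one
sequential region, each reading every other block from its own I/O
context.

#include <fcntl.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define BLK (64 * 1024)

/*
 * Each worker reads every nworkers-th block starting at 'start', so
 * the workers together form one sequential stream, but issued from
 * separate I/O contexts: the pattern CFQ's queue merging detects.
 */
static void worker(const char *path, int start, int nworkers, int blocks)
{
	char *buf = malloc(BLK);
	int fd = open(path, O_RDONLY);
	int i;

	for (i = start; i < blocks; i += nworkers)
		pread(fd, buf, BLK, (off_t)i * BLK);
	close(fd);
	free(buf);
}

int main(int argc, char **argv)
{
	int p;

	for (p = 0; p < 2; p++)
		if (fork() == 0) {
			worker(argv[1], p, 2, 1024);
			_exit(0);
		}
	while (wait(NULL) > 0)
		;
	return 0;
}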
>> >
>> > Hi Corrado,
>> >
>> > In Jeff's test case of running read-test2, I am not even sure
>> > whether any merging between the queues took place: on an NCQ SSD we
>> > drive deeper queue depths, and unless read-test2 creates more than
>> > 32 threads, there might not be any merging happening at all.
>>
>> Jeff's test was modeled after real use cases: widely used, legacy
>> programs like dump.
>> Since we have often said that splitting a sequential stream across
>> multiple threads is not the correct approach, and we introduced the
>> change in the kernel just to support those programs (not to encourage
>> writing more of their kind), we can assume that if they do not drive
>> deeper queues, no one will. So the overhead is just overhead, and
>> will never give any benefit.
>
> Two things come to mind.
>
> - Even if dump/read-test2 is not driving deeper queue depths, other
> competing programs might be, which could still create queue-merging
> opportunities for the dump program on an NCQ SSD.
Correct. We need a better test to determine if those merges can happen.
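One possible test (a hypothetical sketch, assuming libaio; build with
-laio and check e.g. with blktrace whether merging still triggers):
keep ~32 random reads in flight on the same device while the strided
readers run, so the queue depth stays high.

#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdlib.h>
#include <string.h>

#define DEPTH 32
#define BLK   (64 * 1024)
#define SPAN  4096	/* number of blocks to scatter reads over */

int main(int argc, char **argv)
{
	io_context_t ctx;
	struct iocb iocbs[DEPTH], *ptrs[DEPTH];
	struct io_event events[DEPTH];
	int fd = open(argv[1], O_RDONLY | O_DIRECT);
	int i, n;

	memset(&ctx, 0, sizeof(ctx));
	io_setup(DEPTH, &ctx);

	/* Prime DEPTH random reads so the device queue starts full. */
	for (i = 0; i < DEPTH; i++) {
		void *buf;

		posix_memalign(&buf, 4096, BLK);
		io_prep_pread(&iocbs[i], fd, buf, BLK,
			      (long long)(rand() % SPAN) * BLK);
		ptrs[i] = &iocbs[i];
	}
	io_submit(ctx, DEPTH, ptrs);

	/* Resubmit each completion at a new random offset, pinning the
	 * queue depth at DEPTH until the program is interrupted. */
	for (;;) {
		n = io_getevents(ctx, 1, DEPTH, events, NULL);
		for (i = 0; i < n; i++) {
			struct iocb *cb = events[i].obj;

			io_prep_pread(cb, fd, cb->u.c.buf, BLK,
				      (long long)(rand() % SPAN) * BLK);
			io_submit(ctx, 1, &cb);
		}
	}
}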
>
> - If we think that on an NCQ SSD there is practically no queue-merging
> opportunity, then there is no point in keeping it enabled for WRITES
> either?
Probably. That needs testing, too.
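If testing confirms that writes do not benefit either, the gate
sketched earlier could simply drop the queued[WRITE] condition
(untested):

	if (blk_queue_nonrot(cfqd->queue) && cfqd->hw_tag)
		return;	/* skip merging bookkeeping for all queues */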
Corrado
>
> Vivek
>
>>
>> I therefore want to remove it, since on SSDs that overhead matters.
>> >
>> > We also don't have any data/numbers on what kind of CPU savings
>> > this patch brings.
>>
>> Jeff's test showed higher bandwidth with merging disabled, which
>> implies some saving is present.
>>
>> Thanks,
>> Corrado
>>
>> >
>> > Vivek
>> >
>> >>
>> >> As you said, most workloads don't benefit from queue merging. For
>> >> those, the patch just removes an overhead.
>> >>
>> >> Thanks,
>> >> Corrado
>> >>