Date:	Mon, 11 Jan 2010 14:24:02 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Jeff Garzik <jeff@...zik.org>
Cc:	Corrado Zoccolo <czoccolo@...il.com>,
	Linux-Kernel <linux-kernel@...r.kernel.org>,
	Jeff Moyer <jmoyer@...hat.com>,
	Vivek Goyal <vgoyal@...hat.com>,
	Shaohua Li <shaohua.li@...el.com>,
	Gui Jianfeng <guijianfeng@...fujitsu.com>
Subject: Re: [PATCH] cfq-iosched: NCQ SSDs do not need read queue merging

On Mon, Jan 11 2010, Jeff Garzik wrote:
> On 01/11/2010 08:13 AM, Jens Axboe wrote:
>> On Mon, Jan 11 2010, Corrado Zoccolo wrote:
>>> On Mon, Jan 11, 2010 at 12:25 PM, Jeff Garzik<jeff@...zik.org>  wrote:
>>>> On 01/10/2010 04:04 PM, Corrado Zoccolo wrote:
>>>>>
>>>>> The performance of NCQ SSDs is not affected by the
>>>>> distance between read requests, so there is no point in
>>>>> paying the overhead of merging such queues.
>>>>>
>>>>> Non-NCQ SSDs showed regressions in some special cases, so
>>>>> they are excluded from this patch.
>>>>>
>>>>> This patch intentionally doesn't affect writes: it changes
>>>>> the queued[] field to be indexed by READ/WRITE instead of
>>>>> SYNC/ASYNC, and only computes proximity for queues with
>>>>> WRITE requests.
>>>>>
>>>>> Signed-off-by: Corrado Zoccolo<czoccolo@...il.com>
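
For readers following along, the decision described above boils down to
something like the sketch below. This is a minimal, self-contained
illustration, not the actual patch: the names (device_caps,
should_try_queue_merge) and fields are simplified stand-ins for the real
CFQ structures, where the equivalent checks would involve
blk_queue_nonrot() and the hw_tag heuristic.

#include <stdbool.h>

/* Simplified stand-ins for the real request/device state. */
enum data_dir { DIR_READ, DIR_WRITE };

struct device_caps {
	bool non_rotational;	/* SSD                                    */
	bool ncq;		/* drive accepts multiple queued commands */
};

/*
 * Writes keep the proximity search so they can still be merged; reads
 * skip it on NCQ SSDs, where LBA distance does not affect service time
 * and the search is pure CPU overhead.
 */
static bool should_try_queue_merge(enum data_dir dir,
				   const struct device_caps *caps)
{
	if (dir == DIR_WRITE)
		return true;
	return !(caps->non_rotational && caps->ncq);
}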
>>>>
>>>> That's not really true.  Overhead always increases as the total number of
>>>> ATA commands issued increases.
>>>
>>> Jeff Moyer tested the patch on the workload that benefits most
>>> from queue merging, and found that the patch improved performance.
>>> So removing the CPU overhead helps more than the marginal gain
>>> that merging provides on this hardware.
>>
>> That's not always going to be true. On SATA the command overhead is
>> fairly low, but on other hardware that may not be the case. Unless you
>> are CPU bound by your IO device, merging will always be beneficial. I'm
>> a little behind on email after my vacation; Jeff, what numbers did you
>> generate, and on what hardware?
>
> ...and on what workload? "The workload that benefits most from queue
> merging" is highly subjective, and likely does not cover most workloads
> SSDs will see in the field.

That, too. The queue merging is not exactly cheap, so perhaps we can
work on making that work better as well. I've got some new hardware in
the bag that'll do IOPS in the millions range, so I'll throw some tests
at it too once I get it cabled up.
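
To put that trade-off in context: merging saves the per-command cost of
the requests it eliminates, while the proximity lookup costs CPU on every
request, so which side wins depends on the per-command overhead of the
transport and on how often a merge actually happens. A back-of-envelope
sketch with purely illustrative numbers (none of these are measurements):

#include <stdio.h>

int main(void)
{
	/* Hypothetical figures for illustration only, not measurements. */
	double per_cmd_cost_us = 2.0;	/* cost to issue one command              */
	double lookup_cost_us  = 0.5;	/* proximity-search cost per request      */
	double merge_hit_rate  = 0.05;	/* fraction of requests actually merged   */

	/* Expected saving per request vs. the cost of looking for merges. */
	double saved = merge_hit_rate * per_cmd_cost_us;
	double spent = lookup_cost_us;

	printf("saved %.2f us/req, spent %.2f us/req -> merging %s\n",
	       saved, spent,
	       saved > spent ? "pays off" : "costs more than it saves");
	return 0;
}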

-- 
Jens Axboe

