Date:	Tue, 27 Mar 2007 17:12:00 -0500
From:	Mark Rustad <mrustad@...il.com>
To:	Jeff Garzik <jeff@...zik.org>
Cc:	Justin Piszcz <jpiszcz@...idpixels.com>,
	linux-kernel@...r.kernel.org,
	IDE/ATA development list <linux-ide@...r.kernel.org>
Subject: Re: Why is NCQ enabled by default by libata? (2.6.20)

On Mar 27, 2007, at 1:38 PM, Jeff Garzik wrote:

> Mark Rustad wrote:
>> reorder any queued operations. Of course if you really care about  
>> your data, you don't really want to turn write cache on.
>
> That's a gross exaggeration.  FLUSH CACHE and FUA both ensure data  
> integrity as well.
>
> Turning write cache off has always been a performance-killing  
> action on ATA.

Perhaps. Folks I work with would disagree with that, but I am not  
enough of a storage expert to judge; my statement mirrors the  
judgement of colleagues who know more than I do.
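
To illustrate the point from the application side: with the write  
cache left on, integrity comes from the host asking the drive to  
flush at the right moments. A minimal user-space sketch (assuming  
the filesystem turns fsync() into a FLUSH CACHE/FUA to the device,  
which depends on barriers being enabled):

/* Hypothetical example, not from this thread: write a record and
 * force it to stable storage.  fsync() asks the kernel to flush the
 * page cache and (with barriers enabled) the drive's write cache
 * before returning. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	const char msg[] = "critical record\n";
	int fd = open("journal.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);

	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}
	if (write(fd, msg, sizeof(msg) - 1) != (ssize_t)(sizeof(msg) - 1)) {
		perror("write");
		close(fd);
		return EXIT_FAILURE;
	}
	/* Without this the record may still sit in the page cache or in
	 * the drive's volatile write cache when the power fails. */
	if (fsync(fd) < 0) {
		perror("fsync");
		close(fd);
		return EXIT_FAILURE;
	}
	close(fd);
	return EXIT_SUCCESS;
}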

>> Also the controller used can have unfortunate interactions. For  
>> example the Adaptec SAS controller firmware will never issue more  
>> than two queued commands to a SATA drive (even though the firmware  
>> will happily accept more from the driver), so even if an attached  
>> drive is capable of reordering queued commands, its performance is  
>> seriously crippled by not getting more commands queued up. In  
>> addition, some drive firmware seems to try to bunch up queued  
>> command completions which interacts very badly with a controller  
>> that queues up so few commands. In this case turning NCQ off  
>> performs better because the drive knows it can't hold off  
>> completions to reduce interrupt load on the host – a good idea  
>> gone totally wrong when used with the Adaptec controller.
>
> All of that can be fixed with an Adaptec firmware upgrade, so not  
> our problem here, and not a reason to disable NCQ in libata core.

It theoretically could be, but we are using the latest Adaptec  
firmware. Until firmware exists that fixes it, it remains an issue.  
We worked with Adaptec to isolate the problem, but no resolution has  
been forthcoming from them. I agree that this does not mean that NCQ  
should be disabled in libata core, but a blacklist of problematic  
controller/drive/firmware combinations may need to be maintained, as  
distasteful as that is.
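
If it comes to that, I would expect something like a small quirk  
table keyed on the model and firmware revision strings from IDENTIFY  
DEVICE, similar in spirit to the device blacklist libata already  
uses for other quirks. A rough sketch (the model and firmware  
strings below are made up, purely illustrative):

/* Illustrative sketch only -- not a proposal of specific devices. */
#include <stdio.h>
#include <string.h>

struct ncq_quirk {
	const char *model;	/* IDENTIFY model string, NULL = any */
	const char *fw_rev;	/* firmware revision, NULL = any     */
	unsigned int flags;
};

#define QUIRK_DISABLE_NCQ	(1 << 0)

static const struct ncq_quirk ncq_quirks[] = {
	{ "EXAMPLE DRIVE 123", "FW1.00", QUIRK_DISABLE_NCQ },
	{ "EXAMPLE DRIVE 456", NULL,     QUIRK_DISABLE_NCQ },
	{ NULL, NULL, 0 }	/* terminator */
};

static unsigned int ncq_quirk_flags(const char *model, const char *fw_rev)
{
	const struct ncq_quirk *q;

	for (q = ncq_quirks; q->model || q->fw_rev || q->flags; q++) {
		if (q->model && strcmp(q->model, model))
			continue;
		if (q->fw_rev && strcmp(q->fw_rev, fw_rev))
			continue;
		return q->flags;
	}
	return 0;
}

int main(void)
{
	/* Example lookup with made-up strings. */
	printf("flags=%#x\n",
	       ncq_quirk_flags("EXAMPLE DRIVE 456", "FW9.99"));
	return 0;
}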

>> Today SATA NCQ seems to be an area where few combinations work  
>> well. It seems so bad to me that a whitelist might be better than  
>> a blacklist. That is probably overstating it, but NCQ performance  
>> is certainly a big problem.
>
> Real world testing disagrees with you.  NCQ has been enabled for a  
> while now.  We would have screaming hordes of users if the majority  
> of configurations were problematic.

I didn't say that it affects the majority or that it doesn't work;  
it just often doesn't perform well. If it didn't work there would be  
lots of howling for sure. I'm also not saying that it is a libata  
problem. It seems mostly to be controller and drive firmware issues -  
and the odd fan issue (if you saw the thread: [BUG 2.6.21-rc3-git9]  
SATA NCQ failure with Samsung HD401LJ).
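
When a particular combination does misbehave, it can at least be  
worked around per drive from user space rather than in libata  
defaults, e.g. by capping the queue depth to 1, which in practice  
keeps NCQ from being used on that device. A sketch (assuming the  
kernel exposes a writable queue_depth attribute for the disk under  
/sys/block/<dev>/device/):

/* Illustrative only: cap the queue depth of one disk to 1.
 * Run as root, e.g.  ./capqd sda */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	char path[256];
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <disk>  (e.g. %s sda)\n",
			argv[0], argv[0]);
		return EXIT_FAILURE;
	}
	snprintf(path, sizeof(path), "/sys/block/%s/device/queue_depth",
		 argv[1]);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return EXIT_FAILURE;
	}
	fprintf(f, "1\n");
	if (fclose(f) != 0) {
		perror(path);
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}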

I guess I am mainly lamenting the current state of SATA/NCQ devices  
and sharing what little I have picked up about it - which is that I  
want SAS disks in my next system!

-- 
Mark Rustad, MRustad@...il.com


