Message-ID: <46128175.2090506@redhat.com>
Date: Tue, 03 Apr 2007 12:31:49 -0400
From: Chris Snook <csnook@...hat.com>
To: Paa Paa <paapaa125@...mail.com>
CC: linux-kernel@...r.kernel.org
Subject: Re: Lower HD transfer rate with NCQ enabled?

Paa Paa wrote:
> I'm using Linux 2.6.20.4. I noticed that I get lower SATA hard drive
> throughput with 2.6.20.4 than with 2.6.19. The reason was that 2.6.20
> enables NCQ by default (queue_depth = 31/32 instead of 0/32). Transfer
> rate was measured using "hdparm -t":
>
> With NCQ (queue_depth == 31): 50MB/s.
> Without NCQ (queue_depth == 0): 60MB/s.
>
> A 20% difference is quite a lot. This is with an Intel ICH8R controller and
> a Western Digital WD1600YS hard disk in AHCI mode. I also used the following
> command to copy a biggish (540MB) file with cat and time it:
>
> rm temp && sync && time sh -c 'cat quite_big_file > temp && sync'
>
> Here I noticed no difference at all with or without NCQ. The times
> (real time) were basically the same across many successive runs: around 19s.
>
> Q: What conclusions, if any, can I draw from the "hdparm -t" results? Do I
> really have lower performance with NCQ or not? If I do, is this because of
> my HD or because of the kernel?

hdparm -t is a perfect example of a synthetic benchmark: it reads a single
sequential stream from one thread, so NCQ has nothing to reorder and its
queuing overhead is all you measure. NCQ was designed to optimize real-world
workloads. The overhead gets hidden pretty well when there are multiple
requests in flight simultaneously, as tends to be the case when you have a
user thread reading data while a kernel thread is asynchronously flushing
that thread's buffered writes. Given that you're breaking even with one user
thread and one kernel thread doing I/O, you'll probably see performance
improvements at higher thread counts.
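
If you want to see NCQ pull ahead, give the drive several streams to juggle
at once. A rough sketch (untested; the device name /dev/sda, the offsets,
and the sizes are only illustrative, and it needs root):

    # flush dirty pages and drop the page cache so reads actually hit the disk
    sync && echo 3 > /proc/sys/vm/drop_caches
    # four concurrent sequential readers at widely spaced offsets
    time sh -c 'for i in 0 1 2 3; do
        dd if=/dev/sda of=/dev/null bs=1M count=256 skip=$((i * 8192)) &
    done; wait'

With queue_depth=31 the drive can interleave and reorder requests from the
four streams to cut down on seeking; with queue_depth=0 it must service them
strictly one at a time, so I'd expect the gap to swing the other way.
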
-- Chris