Message-ID: <alpine.LFD.2.00.0903300946050.3948@localhost.localdomain>
Date: Mon, 30 Mar 2009 09:58:26 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Mark Lord <lkml@....ca>
cc: Ric Wheeler <rwheeler@...hat.com>,
Chris Mason <chris.mason@...cle.com>,
"Andreas T.Auer" <andreas.t.auer_lkml_73537@...us.ath.cx>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
Theodore Tso <tytso@....edu>,
Stefan Richter <stefanr@...6.in-berlin.de>,
Jeff Garzik <jeff@...zik.org>,
Matthew Garrett <mjg59@...f.ucam.org>,
Andrew Morton <akpm@...ux-foundation.org>,
David Rees <drees76@...il.com>, Jesper Krogh <jesper@...gh.cc>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 2.6.29
On Mon, 30 Mar 2009, Mark Lord wrote:
>
> I spent an entire day recently, trying to see if I could significantly fill
> up the 32MB cache on a 750GB Hitachi SATA drive here.
>
> With deliberate/random write patterns, big and small, near and far,
> I could not fill the cache with anything approaching a full second
> of latent write-cache flush time.
>
> Not even close. Which is a pity, because I really wanted to do some testing
> related to a deep write cache. But it just wouldn't happen.
>
> I tried this again with the 16MB cache on a Seagate drive; no difference.
>
> Bummer. :)
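
(For concreteness, the kind of test being described might look roughly
like the sketch below. This is a guess at the methodology, not Mark's
actual tool; the write count and sizes are made up, and it assumes a
scratch disk you can destroy, with its write cache enabled, i.e.
hdparm -W1:)

/* flushtime.c - scatter O_DIRECT writes across a disk, then time
 * fdatasync(), which issues a FLUSH CACHE to the device. The writes
 * bypass the page cache, so the flush time is (mostly) the drive
 * draining its own write cache.
 * Build: gcc -O2 -D_FILE_OFFSET_BITS=64 -o flushtime flushtime.c
 * Usage: ./flushtime /dev/sdX <size-in-GB>   (DESTROYS DATA) */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define NWRITES 2048		/* made-up number of scattered writes */
#define BLKSZ   4096ULL

static double now(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <dev> <size-in-GB>\n", argv[0]);
		return 1;
	}
	int fd = open(argv[1], O_RDWR | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	unsigned long long nblocks = atoll(argv[2]) * (1ULL << 30) / BLKSZ;

	void *buf;
	if (posix_memalign(&buf, BLKSZ, BLKSZ))
		return 1;
	memset(buf, 0x5a, BLKSZ);

	srandom(time(NULL));
	for (int i = 0; i < NWRITES; i++) {
		/* far-apart random offsets: each cached segment should
		 * cost the drive a seek when it finally writes back */
		off_t off = (off_t)(random() % nblocks) * BLKSZ;
		if (pwrite(fd, buf, BLKSZ, off) != BLKSZ) {
			perror("pwrite");
			return 1;
		}
	}

	double t0 = now();
	fdatasync(fd);		/* triggers FLUSH CACHE on the device */
	printf("flush took %.1f ms\n", (now() - t0) * 1e3);
	close(fd);
	return 0;
}
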
Try it with laptop drives. You might get to a second, or at least hundreds
of ms (not counting the spinup delay if it went to sleep, obviously). You
probably tested desktop drives (that 750GB Hitachi one is not a low end
one, and I assume the Seagate one isn't either).
You'll have a much easier time getting long latencies when seeks take tens
of ms, and the platter rotates at some pitiful 3600rpm (ok, I guess those
drives are hard to find these days - I guess 4200rpm is the norm even for
1.8" laptop harddrives).
And also - this is probably obvious to you, but it might not be
immediately obvious to everybody - make sure that you do have TCQ going,
and at full depth. If the drive supports TCQ (and they all do, these days)
it is quite possible that the drive firmware basically limits the write
caching to one segment per TCQ entry (or at least to something smallish).
Why? Because that really simplifies some of the problem space for the
firmware a _lot_ - if you have at least as many segments in your cache as
your max TCQ depth, it means that you always have one segment free to be
re-used without any physical IO when a new command comes in.
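
(The queue-depth side of this is easy to sanity-check, since the depth
the kernel is actually using is exported in sysfs. A minimal sketch;
the default device name is just an example:)

/* qdepth.c - print the NCQ/TCQ queue depth the kernel is using for a
 * disk, read from the standard sysfs attribute. */
#include <stdio.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "sda";	/* example */
	char path[128];
	snprintf(path, sizeof(path),
		 "/sys/block/%s/device/queue_depth", dev);

	FILE *f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	int depth = 0;
	if (fscanf(f, "%d", &depth) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);

	printf("%s: queue depth %d%s\n", dev, depth,
	       depth <= 1 ? " (queued commands effectively off!)" : "");
	return 0;
}
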
And if I were a disk firmware engineer, I'd try my damnedest to keep my
problem space simple, so I would do exactly that kind of "limit the number
of dirty cache segments by the queue size" thing.
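
(A toy model of that conjecture, with all numbers invented, just to
show why the invariant is attractive: if dirty segments are capped at
total segments minus max queue depth, an arriving queued command can
always grab a free segment without doing physical IO first:)

/* segmodel.c - toy model: cap dirty cache segments at
 * SEGMENTS - QDEPTH and check that an incoming queued command never
 * has to wait for a segment. Purely illustrative, not real firmware. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define SEGMENTS 32	/* pretend: 32MB cache in 1MB segments */
#define QDEPTH   16	/* pretend: max outstanding queued commands */

int main(void)
{
	int dirty = 0, inflight = 0;

	srandom(42);
	for (long step = 0; step < 1000000; step++) {
		if (inflight < QDEPTH && (random() & 1)) {
			/* new queued write: must get a segment NOW */
			assert(dirty + inflight < SEGMENTS);	/* holds */
			inflight++;
		} else if (inflight > 0) {
			inflight--;	/* command data lands in cache */
			if (dirty < SEGMENTS - QDEPTH)
				dirty++;	/* cached, stays dirty */
			/* else: firmware writes the segment back before
			 * completing the command, keeping dirty bounded */
		}
		if (dirty > 0 && (random() % 4) == 0)
			dirty--;	/* background writeback */
	}
	printf("invariant held for 1000000 steps\n");
	return 0;
}
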
But I dunno. You may not want to touch those slow laptop drives with a
ten-foot pole. It's certainly not my favorite pastime.
Linus