[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <49D1295E.7010300@rtr.ca>
Date: Mon, 30 Mar 2009 16:19:42 -0400
From: Mark Lord <lkml@....ca>
To: Chris Mason <chris.mason@...cle.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Ric Wheeler <rwheeler@...hat.com>,
"Andreas T.Auer" <andreas.t.auer_lkml_73537@...us.ath.cx>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
Theodore Tso <tytso@....edu>,
Stefan Richter <stefanr@...6.in-berlin.de>,
Jeff Garzik <jeff@...zik.org>,
Matthew Garrett <mjg59@...f.ucam.org>,
Andrew Morton <akpm@...ux-foundation.org>,
David Rees <drees76@...il.com>, Jesper Krogh <jesper@...gh.cc>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 2.6.29
Chris Mason wrote:
> On Mon, 2009-03-30 at 14:39 -0400, Mark Lord wrote:
>> Chris Mason wrote:
>>> I had some fun trying things with this, and I've been able to reliably
>>> trigger stalls in write cache of ~60 seconds on my seagate 500GB sata
>>> drive. The worst I saw was 214 seconds.
>> ..
>>
>> I'd be more interested in how you managed that (above),
>> than the quite different test you describe below.
>>
>> Yes, different, I think. The test below just times how long a single
>> chunk of data might stay in-drive cache under constant load,
>> rather than how long it takes to flush the drive cache on command.
>>
>> Right?
>>
>> Still, useful for other stuff.
>>
>
> That's right, it is testing for starvation in a single sector, not for
> how long the cache flush actually takes. But, your remark from higher
> up in the thread was this:
>
> >
> > Anything in the drive's write cache very probably made
> > it to the media within a second or two of arriving there.
..
Yeah, but that was in the context of how long the drive takes
to clear out it's cache when there's a (brief) break in the action.
Still, it's really good to see hard data on a drive that actually
starves itself for an extended period. Very handy insight, that!
> Sorry if I misread things. But the goal is just to show that it really
> does matter if we use a writeback cache with or without barriers. The
> test has two datasets:
>
> 1) An area that is constantly overwritten sequentially
> 2) A single sector that stores a critical bit of data.
>
> #1 is the filesystem log, #2 is the filesystem super. This isn't a
> specialized workload ;)
..
Good points.
I'm thinking of perhaps acquiring an OCZ Vertex SSD.
The 120GB ones apparently have 64MB of RAM inside,
much of which is used to cache data heading to the flash.
I wonder how long it takes to empty out that sucker!
Cheers
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists