Message-ID: <49D103C1.3010405@rtr.ca>
Date: Mon, 30 Mar 2009 13:39:13 -0400
From: Mark Lord <lkml@....ca>
To: Ric Wheeler <rwheeler@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
"Andreas T.Auer" <andreas.t.auer_lkml_73537@...us.ath.cx>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
Theodore Tso <tytso@....edu>,
Stefan Richter <stefanr@...6.in-berlin.de>,
Jeff Garzik <jeff@...zik.org>,
Matthew Garrett <mjg59@...f.ucam.org>,
Andrew Morton <akpm@...ux-foundation.org>,
David Rees <drees76@...il.com>, Jesper Krogh <jesper@...gh.cc>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 2.6.29

Ric Wheeler wrote:
> Linus Torvalds wrote:
..
>> That's one of the issues. The cost of those flushes can be really
>> quite high, and as mentioned, in the absence of redundancy you don't
>> actually get the guarantees that you seem to think that you get.
>
> I have measured the costs of the write flushes on a variety of devices;
> routinely, a cache flush is on the order of 10-20 ms with a healthy
> S-ATA drive.
..
Err, no.  Yes, the flush itself will be very quick,
since the drive is nearly always keeping up with the I/O
already (as we are discussing in a separate subthread here!).

But.. the cost of that FLUSH_CACHE command can still be quite significant.
To issue it, we first have to stop accepting new R/W requests,
then wait for up to 32 currently in-flight requests to complete.
Only then can we issue the cache-flush and wait for it to complete,
before finally resuming normal R/W again.
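
In other words, something like the skeleton below.  This is purely
illustrative pseudo-C -- the helper names (quiesce, drain,
do_flush_cache, resume) are invented for the sketch and are not
the actual libata functions:

/* Illustrative only: mirrors the sequence described above. */
enum { MAX_TAGS = 32 };			/* NCQ queue depth */

struct port { int inflight; };		/* queued R/W cmds in flight */

static void quiesce(struct port *p)        { /* stop accepting new R/W */ }
static void drain(struct port *p)          { while (p->inflight) ; /* wait */ }
static void do_flush_cache(struct port *p) { /* issue FLUSH_CACHE, wait */ }
static void resume(struct port *p)         { /* accept new R/W again */ }

static void barrier_flush(struct port *p)
{
	quiesce(p);		/* 1: no new R/W accepted           */
	drain(p);		/* 2: wait for up to MAX_TAGS cmds  */
	do_flush_cache(p);	/* 3: the flush itself, then wait   */
	resume(p);		/* 4: resume normal R/W             */
}

The flush is serialized against the whole queue, and that is where
the real latency comes from, not from the flush command by itself.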

And FLUSH_CACHE is a PIO command for most libata hosts,
so it has a multi-microsecond CPU hit as well as the I/O hit,
whereas regular R/W commands will usually use less CPU because
they are normally issued through an automated host command queue.

Tiny, but significant.  And more so on smaller/slower end-user systems
like netbooks than on datacenter servers, perhaps.
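
For reference, here is a quick-and-dirty userspace sketch (my own
illustration here, nothing from the kernel tree) that times a single
FLUSH CACHE (0xE7) issued through the legacy HDIO_DRIVE_CMD ioctl.
It assumes a drive node like /dev/sda and needs root:

/* flushtime.c -- build with: gcc -O2 -o flushtime flushtime.c */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/time.h>
#include <linux/hdreg.h>

int main(int argc, char **argv)
{
	unsigned char args[4];		/* cmd, nsect, feature, count */
	struct timeval t0, t1;
	long usecs;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY | O_NONBLOCK);
	if (fd == -1) {
		perror("open");
		return 1;
	}
	memset(args, 0, sizeof(args));
	args[0] = 0xE7;			/* FLUSH CACHE */

	gettimeofday(&t0, NULL);
	if (ioctl(fd, HDIO_DRIVE_CMD, args) == -1)
		perror("HDIO_DRIVE_CMD(FLUSH CACHE)");
	gettimeofday(&t1, NULL);

	usecs = (t1.tv_sec - t0.tv_sec) * 1000000L
	      + (t1.tv_usec - t0.tv_usec);
	printf("FLUSH CACHE took %ld us\n", usecs);
	close(fd);
	return 0;
}

Note that this measures the flush against an otherwise idle queue;
the quiesce/drain cost described above only shows up under
concurrent I/O load.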
Cheers