Message-ID: <Pine.LNX.4.64.0903252036130.8872@blonde.anvils>
Date: Wed, 25 Mar 2009 20:41:17 +0000 (GMT)
From: Hugh Dickins <hugh@...itas.com>
To: Jens Axboe <jens.axboe@...cle.com>
cc: Ric Wheeler <rwheeler@...hat.com>, Jeff Garzik <jeff@...zik.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Theodore Tso <tytso@....edu>, Ingo Molnar <mingo@...e.hu>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
Arjan van de Ven <arjan@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Nick Piggin <npiggin@...e.de>, David Rees <drees76@...il.com>,
Jesper Krogh <jesper@...gh.cc>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 2.6.29
On Wed, 25 Mar 2009, Jens Axboe wrote:
> On Wed, Mar 25 2009, Ric Wheeler wrote:
> > Jens Axboe wrote:
> >>
> >> Another problem is that FLUSH_CACHE sucks. Really. And not just on
> >> ext3/ordered, generally. Write a 50 byte file, fsync, flush cache and
> >> wait for the world to finish. Pretty hard to teach people to use a nicer
> >> fdatasync(), when the majority of the cost now becomes flushing the
> >> cache of that 1TB drive you happen to have 8 partitions on. Good luck
> >> with that.
> >>
> > And, as I am sure that you do know, to add insult to injury, FLUSH_CACHE
> > is per device (not file system).
> >
> > When you issue an fsync() on a disk with multiple partitions, you will
> > flush the data for all of its partitions from the write cache....
>
> Exactly, that's what my (vague) 8 partition reference was for :-)
> A range flush would be so much more palatable.
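
[For concreteness, the pattern under discussion looks roughly like the
userspace sketch below: a tiny write followed by fdatasync(2). The
filename and payload are made up for illustration; on a disk with a
write-back cache, that fdatasync() is what ends up forcing the
device-wide cache flush being complained about.]

	/* Minimal sketch of the "write a 50 byte file, fsync" case.
	 * Hypothetical filename; error handling trimmed for brevity. */
	#include <fcntl.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[50];
		int fd;

		memset(buf, 'x', sizeof(buf));
		fd = open("tiny.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
		if (fd < 0)
			return 1;
		if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
			return 1;
		/* fdatasync() avoids flushing unrelated metadata, but the
		 * block layer still sends a cache flush to the device, so
		 * the whole drive's write cache drains, not just this file. */
		if (fdatasync(fd) != 0)
			return 1;
		return close(fd);
	}
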
Tangential question, but am I right in thinking that BIO_RW_BARRIER
similarly bars across all partitions, whereas its WRITE_BARRIER and
DISCARD_BARRIER users would actually prefer it to apply to just one?
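
[As a point of reference for why the barrier is device-wide: an empty
barrier bio names a target block_device but no sector range, so the
queue typically has nothing finer-grained than "flush everything" to
act on. A simplified sketch, modelled loosely on the 2.6.29-era
blkdev_issue_flush(); the helper names here are hypothetical and error
handling is trimmed.]

	#include <linux/bio.h>
	#include <linux/blkdev.h>
	#include <linux/completion.h>
	#include <linux/fs.h>

	static void empty_barrier_end_io(struct bio *bio, int err)
	{
		/* Wake the submitter once the barrier has completed. */
		complete(bio->bi_private);
	}

	static void issue_empty_barrier(struct block_device *bdev)
	{
		DECLARE_COMPLETION_ONSTACK(wait);
		struct bio *bio = bio_alloc(GFP_KERNEL, 0);

		/* No data pages and no sector range: the only thing named
		 * here is the device itself, partitions and all. */
		bio->bi_bdev = bdev;
		bio->bi_end_io = empty_barrier_end_io;
		bio->bi_private = &wait;

		/* WRITE_BARRIER sets BIO_RW_BARRIER; on a write-back
		 * caching drive this becomes a device-wide cache flush. */
		submit_bio(WRITE_BARRIER, bio);
		wait_for_completion(&wait);
		bio_put(bio);
	}
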
Hugh