Message-ID: <CAAK6Zt1xHuc76GHgcVtxhZPqAK4413Jcb4aWj9DNUUXFrBfrAg@mail.gmail.com>
Date: Thu, 28 Jul 2011 15:10:20 -0700
From: Daniel Ehrenberg <dehrenberg@...gle.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: Alexander Viro <viro@...iv.linux.org.uk>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] dio: Fast-path for page-aligned IOs
On Wed, Jul 27, 2011 at 2:08 PM, Christoph Hellwig <hch@...radead.org> wrote:
> On Mon, Jun 20, 2011 at 04:17:35PM -0700, Dan Ehrenberg wrote:
>> The fast path does not apply for operations of the wrong size
>> or alignment, or for operations on raw drives with 512-byte sectors.
>> It might be possible to make this special case a little more general
>> while maintaining its performance benefits, but I do not believe that
>> the full performance benefits can be achieved without resorting to
>> special handling of simple cases, as is done in this patch.
>
> Did you check how this compares to Andi's small optimizations?
I'm having a little trouble getting his patch working. I hope to have
this data soon, but I've been distracted by some other things.
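For context, the condition the patch tests before taking the fast path is
roughly the following (an illustrative sketch with made-up names, not the
exact code from the patch):

    /*
     * Sketch only: the fast path is taken when the I/O is entirely
     * page-aligned and the underlying block size is PAGE_SIZE, so a
     * raw drive exposing 512-byte sectors never qualifies.
     */
    static bool dio_is_page_aligned(loff_t offset, size_t len,
                                    unsigned long user_addr,
                                    unsigned int blkbits)
    {
            if (blkbits != PAGE_SHIFT)      /* e.g. 512-byte sectors */
                    return false;
            if ((offset | len | user_addr) & (PAGE_SIZE - 1))
                    return false;           /* wrong size or alignment */
            return true;
    }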
>
> Also operations on raw disks are something people with fast devices
> care about a lot. We often hear about benchmark regressions due to
> stupid little things in the direct I/O code.
>
> If we want to special case something, that would be a very easy target:
> a 1:1 mapping of logical to physical blocks means there is no need to
> call the allocator first, and no need for any kind of locking or
> alignment handling.
Are you talking about special-casing a raw block device? I'd like the
optimization to also work with a file system to support a particular
workload I've been looking at.
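For reference, my understanding is that the get_block callback for a raw
block device is essentially an identity mapping, along these lines
(paraphrasing what fs/block_dev.c does rather than quoting it):

    /*
     * Paraphrase: logical block N of the device file maps directly to
     * block N on the device, so there is no allocator to call and
     * nothing to lock before submitting the I/O.
     */
    static int blkdev_get_block(struct inode *inode, sector_t iblock,
                                struct buffer_head *bh, int create)
    {
            bh->b_bdev = I_BDEV(inode);
            bh->b_blocknr = iblock;         /* 1:1 logical-to-physical */
            set_buffer_mapped(bh);
            return 0;
    }

That simplicity is exactly why the block device case is so easy to handle,
but the file system case is the one my workload needs.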
Thanks,
Dan