Message-ID: <5c49b0ed0612071433o3a77be20h9b19326bf6a70281@mail.gmail.com>
Date: Thu, 7 Dec 2006 14:33:46 -0800
From: "Nate Diller" <nate.diller@...il.com>
To: "Chen, Kenneth W" <kenneth.w.chen@...el.com>
Cc: "Jens Axboe" <jens.axboe@...cle.com>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [patch] speed up single bio_vec allocation
On 12/7/06, Chen, Kenneth W <kenneth.w.chen@...el.com> wrote:
> Nate Diller wrote on Thursday, December 07, 2006 1:46 PM
> > the current code is straightforward and obviously correct. you want
> > to make the alloc/dealloc paths more complex by special-casing an
> > arbitrary "small" I/O limit, AFAICT. of *course* you can expect
> > less overhead when you're doing one large I/O vs. two small ones;
> > that's the whole reason we have all this code to coalesce contiguous
> > I/O, do readahead, swap page clustering, etc. we *want* more
> > complexity if it will get us bigger I/Os. I don't see why we want
> > more complexity to reduce the *inherent* penalty of doing smaller
> > ones.
>
> You should check out the latest proposal from Jens Axboe, which treats
> all biovec sizes the same and stuffs the biovec in along with struct
> bio. I think it is a better approach than my first cut at
> special-casing the 1-segment biovec. His patch will speed up I/O of
> all sizes.
i rather agree with his reservations on that, since we'd be making the
allocator's job harder by requesting order-1 pages for every allocation
under large I/O patterns on x86_64. but it reduces complexity instead
of increasing it ... can you produce some benchmarks, not just for your
workload but for one that triggers the order-1 case? biovec-256
transfers are more common than you seem to think, and if the allocator
can't satisfy the order-1 request, that forces the bio code to fall
back to 2 x biovec-128, which, as you indicated above, would show a
real penalty.
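(the back-of-the-envelope for why it goes to order 1, assuming the
usual x86_64 sizes -- my arithmetic, correct me if the numbers are off:)

/*
 *	sizeof(struct bio_vec)		= 16 bytes (bv_page + bv_len + bv_offset)
 *	BIO_MAX_PAGES			= 256
 *	256 * 16			= 4096 = PAGE_SIZE
 *	sizeof(struct bio) + 4096	> PAGE_SIZE
 *
 * so a bio with an inline biovec-256 no longer fits in a single page,
 * and the allocator has to come up with two contiguous pages (order 1),
 * whereas today's separate biovec-256 slab object fills one page exactly.
 */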
NATE