Message-ID: <20120525204651.GA24246@redhat.com>
Date: Fri, 25 May 2012 16:46:51 -0400
From: Mike Snitzer <snitzer@...hat.com>
To: Kent Overstreet <koverstreet@...gle.com>
Cc: linux-kernel@...r.kernel.org, linux-bcache@...r.kernel.org,
dm-devel@...hat.com, linux-fsdevel@...r.kernel.org,
axboe@...nel.dk, yehuda@...newdream.net, mpatocka@...hat.com,
vgoyal@...hat.com, bharrosh@...asas.com, tj@...nel.org,
sage@...dream.net, agk@...hat.com, drbd-dev@...ts.linbit.com,
Dave Chinner <dchinner@...hat.com>
Subject: Re: [PATCH v3 14/16] Gut bio_add_page()
On Fri, May 25 2012 at 4:25pm -0400,
Kent Overstreet <koverstreet@...gle.com> wrote:
> Since generic_make_request() can now handle arbitrary size bios, all we
> have to do is make sure the bvec array doesn't overflow.
I'd love to see the merge_bvec stuff go away, but it does serve a
purpose: filesystems benefit from accurately building up much larger
bios, sized to the underlying device's limits. XFS has leveraged this
for some time, and ext4 adopted it (commit bd2d0210cf) because of the
performance advantage.
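
To illustrate what those limits buy the filesystem, the build-up
pattern in question looks roughly like this (a simplified sketch, not
lifted verbatim from XFS or ext4; the function name and GFP flag are
illustrative):

#include <linux/bio.h>
#include <linux/blkdev.h>

static void write_pages(struct block_device *bdev, sector_t sector,
			struct page **pages, int nr_pages)
{
	struct bio *bio = NULL;
	int i;

	for (i = 0; i < nr_pages; i++) {
		if (!bio) {
			bio = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
			bio->bi_bdev = bdev;
			bio->bi_sector = sector;
		}
		/*
		 * bio_add_page() returns less than PAGE_SIZE when the
		 * queue limits (including merge_bvec_fn) say no more can
		 * be merged; submit what we have and retry this page in
		 * a fresh bio.
		 */
		if (bio_add_page(bio, pages[i], PAGE_SIZE, 0) < PAGE_SIZE) {
			submit_bio(WRITE, bio);
			bio = NULL;
			i--;
			continue;
		}
		sector += PAGE_SIZE >> 9;
	}
	if (bio)
		submit_bio(WRITE, bio);
}

With merge_bvec in the picture, bio_add_page() says "no" exactly at the
device's real boundary, so each submitted bio is as large as the stack
below can actually take.
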
So if the filesystem no longer has a mechanism for building its IO with
an accurate understanding of the limits of the device it sits on
(merge_bvec was that mechanism), and we lean on late splitting instead,
does filesystem performance suffer?
It would be nice to see before-and-after XFS and ext4 benchmarks against
a RAID device (level 5 or 6). I'm especially interested in Dave
Chinner's and Ted's insight here.
Mike