Message-Id: <20100830214420.51c920de.billfink@mindspring.com>
Date: Mon, 30 Aug 2010 21:44:20 -0400
From: Bill Fink <billfink@...dspring.com>
To: Justin Maggard <jmaggard10@...il.com>
Cc: "Ted Ts'o" <tytso@....edu>,
Bill Fink <bill@...ard.sci.gsfc.nasa.gov>,
"adilger@....com" <adilger@....com>,
"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
"Fink, William E. (GSFC-6061)" <william.e.fink@...a.gov>
Subject: Re: [RFC PATCH] ext4: fix 50% disk write performance regression
On Mon, 30 Aug 2010, Justin Maggard wrote:
> On Mon, Aug 30, 2010 at 5:37 PM, Ted Ts'o <tytso@....edu> wrote:
> > On Mon, Aug 30, 2010 at 04:49:58PM -0400, Bill Fink wrote:
> >> > Thanks for reporting it. I'm going to have to take a closer look at
> >> > why this makes a difference. I'm going to guess though that what's
> >> > going on is that we're posting writes in such a way that they're no
> >> > longer aligned or ending at the end of a RAID5 stripe, causing a
> >> > read-modify-write pass. That would easily explain the write
> >> > performance regression.
> >>
> >> I'm not sure I understand. How could calling or not calling
> >> ext4_num_dirty_pages() (unpatched versus patched 2.6.35 kernel)
> >> affect the write alignment?
> >
> > Suppose you have 8 disks, with a stripe size of 16k. Assuming that
> > you're only using one parity disk (i.e., RAID 5) and no spare disks,
> > that means the optimal I/O size is 7*16k == 112k. If we do a write
> > which is smaller than 112k, or which is not a multiple of 112k, then
> > the RAID subsystem will need to do a read-modify-write to update the
> > parity disk. Furthermore, the write had better be aligned on a 112k
> > byte boundary. The block allocator will guarantee that block #0 is
> > aligned on a 112k boundary, but writes also have to be the right size
> > in order to avoid the read-modify-write.
> >
> > If we end up doing very small writes, then it can end up being quite
> > disastrous for write performance.
>
> I'd have to agree that this is likely the case. Just to add a little
> more data here, I tried the same 32GB dd test against a 12-disk MD
> RAID 6 64k chunk array today with and without the patch (although
> against a 2.6.33.7 kernel), and my write performance dropped from
> ~420MB/sec down to 350MB/sec when I used the patched kernel.
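To put concrete numbers on the stripe arithmetic quoted above, here is a
minimal sketch in plain user-space C (not md or ext4 code; the helper
name and values are only illustrative): with N disks, P of them parity,
and chunk size C, a full stripe holds (N - P) * C bytes of data, and any
write that is smaller or misaligned forces a read-modify-write.

#include <stdio.h>

/* Return 1 if a write of 'len' bytes at 'offset' covers whole stripes. */
static int full_stripe_write(unsigned long offset, unsigned long len,
                             unsigned int ndisks, unsigned int nparity,
                             unsigned int chunk)
{
        unsigned long stripe = (unsigned long)(ndisks - nparity) * chunk;

        return (offset % stripe == 0) && (len % stripe == 0);
}

int main(void)
{
        /* 8 disks, RAID5 (1 parity), 16k chunks -> 7*16k == 112k stripe */
        printf("112k write at offset 0: %s\n",
               full_stripe_write(0, 112 * 1024, 8, 1, 16 * 1024) ?
               "full stripe" : "read-modify-write");
        printf(" 64k write at offset 0: %s\n",
               full_stripe_write(0, 64 * 1024, 8, 1, 16 * 1024) ?
               "full stripe" : "read-modify-write");
        return 0;
}
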
I'm curious. Since you're using 12 disks where I was only
using 8, I'm wondering what performance you would get if you
changed the multiplier to, say, 16, i.e.
desired_nr_to_write = wbc->nr_to_write * 16;
It seems you should be getting better than 420 MB/sec on a
12-disk RAID, although perhaps the overhead of doing RAID6
is an issue. I use md RAID0 to combine 2 of the hardware
RAID5 arrays (total of 16 disks), and I'm seeing (with my
patch) 1.3 GB/sec write performance.
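For a rough feel of how the multiplier interacts with the stripe
geometry, here is a simplified user-space model (not the actual ext4
writeback path; the nr_to_write value of 1024 and the 4k page size are
assumptions) using Justin's 12-disk RAID6 with 64k chunks, i.e. a
10*64k == 640k full stripe:

#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
        unsigned long nr_to_write = 1024;          /* assumed batch size */
        unsigned long stripe = 10UL * 64 * 1024;   /* 640k full stripe */
        unsigned int mult;

        /*
         * Larger batches mean fewer batch boundaries, so a smaller
         * fraction of the data lands in the partial stripes at the
         * edges of each batch.
         */
        for (mult = 1; mult <= 16; mult *= 2) {
                unsigned long bytes = nr_to_write * mult * PAGE_SIZE;

                printf("multiplier %2u: batch %6lu KB = %5.1f stripes\n",
                       mult, bytes / 1024, (double)bytes / stripe);
        }
        return 0;
}
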
-Bill
--