Message-ID: <20120817153735.GB31297@thunk.org>
Date: Fri, 17 Aug 2012 11:37:35 -0400
From: Theodore Ts'o <tytso@....edu>
To: Fengguang Wu <fengguang.wu@...il.com>
Cc: linux RAID <linux-raid@...r.kernel.org>, NeilBrown <neilb@...e.de>,
Li Shaohua <shli@...ionio.com>,
Marti Raudsepp <marti@...fo.org>,
Kernel hackers <linux-kernel@...r.kernel.org>,
ext4 hackers <linux-ext4@...r.kernel.org>, maze@...gle.com,
"Shi, Alex" <alex.shi@...el.com>, linux-fsdevel@...r.kernel.org
Subject: Re: ext4 write performance regression in 3.6-rc1 on RAID0/5
On Fri, Aug 17, 2012 at 11:13:18PM +0800, Fengguang Wu wrote:
>
> Obviously the major regressions happen to the 100dd over raid cases.
> Some 10dd cases are also impacted.
>
> The attached graphs show that everything becomes more fluctuated in
> 3.6.0-rc1 for the lkp-nex04/RAID0-12HDD-thresh=8G/ext4-100dd-1 case.
Hmm... I'm not seeing any differences in the block allocation code, or
in ext4's buffered writeback code paths, which would be the most
likely cause of such problems. Maybe a quick eyeball of the blktrace
to see if we're doing something pathologically stupid?
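A minimal sketch of such a trace capture (the device name /dev/md0 and the 60-second window are assumptions; adjust to the RAID device and dd runtime under test, and note blktrace needs root and CONFIG_BLK_DEV_IO_TRACE=y):

```shell
# Record 60 seconds of block I/O on the RAID device while the dds run
blktrace -d /dev/md0 -o trace -w 60

# Eyeball the per-request event stream for seeky or undersized writes
blkparse -i trace | less

# Optionally reduce to summary statistics (seek distances, Q2C latency)
blkparse -i trace -d trace.bin -q -O
btt -i trace.bin
```

Comparing the v3.5 and v3.6-rc1 traces side by side should show quickly whether the write pattern itself changed (smaller requests, more seeks) or only the timing did.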
You could also try running filefrag -v on a few of the dd files to
see if there's any significant difference, although as I said, it
doesn't look like there were any significant changes in the block
allocation code between v3.5 and v3.6-rc1 --- although I suppose
changes in timing could have caused the block allocation
decisions to be different, so it's worth checking that out.
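A quick way to do that comparison (the mount point and file glob are assumptions; substitute the actual paths the dd workload writes to):

```shell
# Dump the extent map of each dd output file; the summary line reports
# the extent count, so a large jump between kernels means the allocator
# is fragmenting the files differently.
for f in /mnt/test/dd-file-*; do
    filefrag -v "$f"
done

# Or just compare extent counts between the two kernels:
filefrag /mnt/test/dd-file-* | sort -t: -k2 -n
</filefrag>
```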
Thanks, regards,
- Ted