[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <4C7C1D04.1080205@redhat.com>
Date: Mon, 30 Aug 2010 16:05:08 -0500
From: Eric Sandeen <sandeen@...hat.com>
To: Bill Fink <bill@...ard.sci.gsfc.nasa.gov>
CC: "Ted Ts'o" <tytso@....edu>, Bill Fink <billfink@...dspring.com>,
"adilger@....com" <adilger@....com>,
"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
"Fink, William E. (GSFC-6061)" <william.e.fink@...a.gov>
Subject: Re: [RFC PATCH] ext4: fix 50% disk write performance regression
Bill Fink wrote:
> On Mon, 30 Aug 2010, Ted Ts'o wrote:
>
>> On Sun, Aug 29, 2010 at 11:11:26PM -0400, Bill Fink wrote:
>>> A 50% ext4 disk write performance regression was introduced
>>> in 2.6.32 and still exists in 2.6.35, although somewhat improved
>>> from 2.6.32. Read performance was not affected).
>> Thanks for reporting it. I'm going to have to take a closer look at
>> why this makes a difference. I'm going to guess though that what's
>> going on is that we're posting writes in such a way that they're no
>> longer aligned or ending at the end of a RAID5 stripe, causing a
>> read-modify-write pass. That would easily explain the write
>> performance regression.
>
> I'm not sure I understand. How could calling or not calling
> ext4_num_dirty_pages() (unpatched versus patched 2.6.35 kernel)
> affect the write alignment?
>
> I was wondering if the locking being done in ext4_num_dirty_pages()
> could somehow be affecting the performance. I did notice from top
> that in the patched 2.6.35 kernel, the I/O wait time was generally
> in the 60-65% range, while in the unpatched 2.6.35 kernel, it was
> at a higher 75-80% range. However, I don't know if that's just a
> result of the lower performance, or a possible clue to its cause.
Using oprofile might also show you how much time is getting spent there..
>> The interesting thing is that we don't actually do anything in
>> ext4_da_writepages() to assure that we are making our writes are
>> appropriate aligned and sized. We do pay attention to make sure they
>> are alligned correctly in the allocator, but _not_ in the writepages
>> code. So the fact that apparently things were well aligned in 2.6.32
>> seems to be luck... (or maybe the writes are perfectly aligned in
>> 2.6.32; they're just much worse with 2.6.35, and with explicit
>> attention paid to the RAID stripe size, we could do even better :-)
>
> It was 2.6.31 that was good. The regression was in 2.6.32. And again
> how does the write alignment get modified simply by whether or not
> ext4_num_dirty_pages() is called?
writeback is full of deep mysteries ... :)
>> If you could run blktraces on 2.6.32, 2.6.35 stock, and 2.6.35 with
>> your patch, that would be really helpful to confirm my hypothesis. Is
>> that something that wouldn't be too much trouble?
>
> I'd be glad to if you explain how one runs blktraces.
Probably the easiest thing to do is to use seekwatcher to invoke blktrace,
if it's easily available for your distro. Then it's just mount debugfs on
/sys/kernel/debug, and:
# seekwatcher -d /dev/whatever -t tracename -o tracename.png -p "your dd command"
It'll leave tracename.* blktrace files, and generate a graph of the IO
in the PNG file.
(this causes an abbreviated trace, but it's probably enough to see what
boundaries the IO was issued on)
Thanks!
-Eric
> -Thanks
>
> -Bill
> --
> To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Powered by blists - more mailing lists