Message-ID: <6601abe90904262229w602e17d8s51ceae05c2895ce5@mail.gmail.com>
Date: Sun, 26 Apr 2009 23:29:39 -0600
From: Curt Wohlgemuth <curtw@...gle.com>
To: Theodore Tso <tytso@....edu>
Cc: Andreas Dilger <adilger@....com>,
ext4 development <linux-ext4@...r.kernel.org>
Subject: Re: Question on block group allocation
Hi Ted:
I don't have access to the actual data right now, because I created
the files and ran the benchmark just before leaving for a few days,
but...
On Sun, Apr 26, 2009 at 8:14 PM, Theodore Tso <tytso@....edu> wrote:
> On Thu, Apr 23, 2009 at 03:02:05PM -0700, Curt Wohlgemuth wrote:
>> > This is likely the "uninit_bg" feature that is causing the allocations
>> > to skip groups which are marked BLOCK_UNINIT. In some sense the benefit
>> > of skipping the block bitmap read during e2fsck is probably not worth
>> > the cost of the extra seeking during IO. As the
>> > filesystem gets more full, the BLOCK_UNINIT flags would be cleared anyway,
>> > so we might as well just keep the early allocations contiguous.
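
(As a side note for anyone reading along, here is a toy sketch of the
behaviour Andreas describes. It is not the real mballoc code and not his
patch; pick_group() and the bg_block_uninit[] table are invented purely
for illustration.)

#include <stdio.h>

#define NGROUPS 8

/* 1 = group still carries a BLOCK_UNINIT-style "never touched" flag */
static const int bg_block_uninit[NGROUPS] = { 0, 1, 1, 1, 0, 1, 1, 1 };

static int pick_group(int goal, int skip_uninit)
{
    int i;

    for (i = 0; i < NGROUPS; i++) {
        int group = (goal + i) % NGROUPS;

        if (skip_uninit && bg_block_uninit[group])
            continue;   /* current behaviour: pass the group over */
        return group;   /* suggested behaviour: just use it */
    }
    return goal;        /* nothing better found; fall back to the goal */
}

int main(void)
{
    /* Goal group 1 is uninitialized: skipping it pushes the allocation
     * out to group 4, while allocating in it stays at the goal. */
    printf("skip uninit groups: allocate in group %d\n", pick_group(1, 1));
    printf("use uninit groups:  allocate in group %d\n", pick_group(1, 0));
    return 0;
}

Skipping keeps the untouched groups cheap for e2fsck's bitmap pass, but
every skip moves data further from the goal group, which is the extra
seeking being weighed above.
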
>
> Well, I tried out Andreas' patch, by doing an rsync copy from my SSD
> root partition to a 5400 rpm laptop drive, and then ran e2fsck and
> dumpe2fs. The results were interesting:
>
>             Before Patch                    After Patch
>             Time in seconds                 Time in seconds
>             Real / User / Sys     MB/s      Real / User / Sys     MB/s
> Pass 1      8.52 / 2.21 / 0.46   20.43      8.84 / 4.97 / 1.11   19.68
> Pass 2     21.16 / 1.02 / 1.86   11.30      6.54 / 1.77 / 1.78   36.39
> Pass 3      0.01 / 0.00 / 0.00  139.00      0.01 / 0.01 / 0.00  128.90
> Pass 4      0.16 / 0.15 / 0.00    0.00      0.17 / 0.17 / 0.00    0.00
> Pass 5      2.52 / 1.99 / 0.09    0.79      2.31 / 1.78 / 0.06    0.86
> Total      32.40 / 5.11 / 2.49   12.81     17.99 / 8.75 / 2.98   23.01
>
> The surprise is in the gross inspection of the dumpe2fs results:
>
>                                   Before Patch    After Patch
> # of non-contig files                 762             779
> # of non-contig directories           571             570
> # of BLOCK_UNINIT bg's                307             293
> # of INODE_UNINIT bg's                503             503
>
> So the interesting thing is that the patch only "broke open" an
> additional 14 block groups (out of the 333 block groups in use when the
> filesystem was created with the unpatched kernel). However, this
> allowed the pass 2 directory time to go *down* by over a factor of
> three (from 21.2 seconds with the unpatched ext4 code to 6.5 seconds
> with the patch).
>
> I think what the patch did was to diminish allocation pressure on the
> first block group in the flex_bg, so we weren't mixing directory and
> regular file contents. This eliminated seeks during pass 2 of e2fsck,
> which was actually a Very Good Thing.
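
(For reference: flex_bg groups adjacent block groups together, 16 by
default, and packs their block bitmaps, inode bitmaps, and inode tables
into the first groups of each flex group. The group-to-flex-group mapping
is just a shift; the sketch below assumes the default of 2^4 groups per
flex group and is not taken from the kernel sources.)

#include <stdio.h>

/* ext4 maps a block group to its flex group with a right shift by
 * log2(groups per flex group). */
static unsigned int flex_group_of(unsigned int block_group,
                                  unsigned int log_groups_per_flex)
{
    return block_group >> log_groups_per_flex;
}

int main(void)
{
    unsigned int log_groups_per_flex = 4;   /* default flex_bg size: 16 */
    unsigned int group;

    /* The 333 groups mentioned above span about 21 flex groups. */
    for (group = 0; group < 333; group += 64)
        printf("block group %3u -> flex group %2u\n",
               group, flex_group_of(group, log_groups_per_flex));
    return 0;
}
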
>
>> > A simple change to verify this would be something like the following,
>> > but it hasn't actually been tested.
>>
>> Tell you what: I'll try this out and see if it helps out my test case.
>
> Let me know what this does for your test case. Hopefully it makes
> things better there as well, since this patch is looking very
> interesting right now.
The random read throughput on the 10GB file went from ~16 MB/s to ~22
MB/s after Andreas' patch; the total fragmentation of the file was
much lower than before his patch.
However, the number of extents went up by quite a bit (I don't have
the debugfs output in front of me at the moment, sorry). It seemed
that no extent crossed a block group boundary; I didn't have time to
check whether Andreas' patch disabled flex BGs, or what else was going
on.
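
(For reference, the per-file extent count is easy to pull with the FIEMAP
ioctl, which is roughly what filefrag -v and debugfs report; the sketch
below is illustrative only and not the exact tooling behind the numbers
above.)

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char **argv)
{
    struct fiemap fm;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* With fm_extent_count == 0 the kernel fills in only
     * fm_mapped_extents, i.e. the total number of extents. */
    memset(&fm, 0, sizeof(fm));
    fm.fm_start = 0;
    fm.fm_length = FIEMAP_MAX_OFFSET;
    fm.fm_flags = FIEMAP_FLAG_SYNC;
    fm.fm_extent_count = 0;

    if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
        perror("FS_IOC_FIEMAP");
        close(fd);
        return 1;
    }

    printf("%s: %u extents\n", argv[1], fm.fm_mapped_extents);
    close(fd);
    return 0;
}
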
I'll be able to send details out on Tuesday.
Curt
>
> Andreas, can I get a Signed-off-by from you for this patch?
>
> Thanks,
>
> - Ted
>