Message-ID: <4CFEA5AF.2000702@redhat.com>
Date: Tue, 07 Dec 2010 15:22:55 -0600
From: Eric Sandeen <sandeen@...hat.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: ext4 development <linux-ext4@...r.kernel.org>,
Jan Kara <jack@...e.cz>
Subject: Re: [PATCH 1/2] ext2: speed up file creates by optimizing rec_len functions
On 12/7/10 3:07 PM, Andrew Morton wrote:
> On Tue, 07 Dec 2010 11:51:05 -0600
> Eric Sandeen <sandeen@...hat.com> wrote:
>
>> The addition of 64k block capability in the rec_len_from_disk
>> and rec_len_to_disk functions added a bit of math overhead which
>> slows down file create workloads needlessly when the architecture
>> cannot even support 64k blocks, thanks to page size limits.
>>
>> The directory entry checking can also be optimized a bit
>> by sprinkling in some unlikely() conditions to move the
>> error handling out of line.
>>
>> bonnie++ sequential file creates on a 512MB ramdisk speeds up
>> from about 2200/s to about 2500/s, about a 14% improvement.
>>
>
> hrm, that's an improbably-large sounding improvement from eliminating
> just a few test-n-branches from a pretty heavyweight operation.
And yet ...
Yeah, I dunno. Part of it is that ext2_add_link does a linear
search, so when you do rec_len_from_disk 50,000 times on a dir,
that little bit adds up quite badly I suppose.
Retesting at a bunch of different numbers of files in bonnie
(with a small sample size, so probably a little noisy):
                |-- files per sec --|
files             stock      patched    delta
10,000            12300        14700     +19%
20,000             6300         7600     +20%
30,000             4200         5000     +20%
40,000             3150         3700     +17%
50,000             2500         3000     +20%
(again all on a 512MB ramdisk)
*shrug* I'll believe my lyin' eyes, I guess. :)
-Eric