Message-ID: <4B435B34.20003@austin.ibm.com>
Date: Tue, 05 Jan 2010 09:31:00 -0600
From: Steven Pratt <slpratt@...tin.ibm.com>
To: Dave Chinner <david@...morbit.com>
CC: Chris Mason <chris.mason@...cle.com>, tytso@....edu,
Evgeniy Polyakov <zbr@...emap.net>,
Peter Grandi <pg_jf2@....for.sabi.co.UK>, xfs@....sgi.com,
reiserfs-devel@...r.kernel.org, linux-ext4@...r.kernel.org,
linux-btrfs@...r.kernel.org, jfs-discussion@...ts.sourceforge.net,
ext-users <ext3-users@...hat.com>, linux-nilfs@...r.kernel.org
Subject: Re: [Jfs-discussion] benchmark results
Dave Chinner wrote:
> On Mon, Jan 04, 2010 at 11:27:48AM -0500, Chris Mason wrote:
>
>> On Fri, Dec 25, 2009 at 11:11:46AM -0500, tytso@....edu wrote:
>>
>>> On Fri, Dec 25, 2009 at 02:46:31AM +0300, Evgeniy Polyakov wrote:
>>>
>>>>> [1] http://samba.org/ftp/tridge/dbench/README
>>>>>
>>>> I could not resist writing a small note: no matter which
>>>> benchmark is being run, it _does_ show system behaviour under one
>>>> condition or another. And when the system behaves rather badly, it is
>>>> quite a common comment that the benchmark was useless. But it did show
>>>> that the system has a problem, even if a rarely triggered one :)
>>>>
>>> If people are using benchmarks to improve file system, and a benchmark
>>> shows a problem, then trying to remedy the performance issue is a good
>>> thing to do, of course. Sometimes, though, the case which is
>>> demonstrated by a poor benchmark is an extremely rare corner case that
>>> doesn't accurately reflect common real-life workloads --- and if
>>> addressing it results in a tradeoff which degrades much more common
>>> real-life situations, then that would be a bad thing.
>>>
>>> In situations where benchmarks are used competitively, it's rare that
>>> it's actually a *problem*. Instead it's much more common that a
>>> developer is trying to prove that their file system is *better* to
>>> gullible users who think that a single one-dimensional number is
>>> enough for them to choose file system X over file system Y.
>>>
>> [ Look at all this email from my vacation...sorry for the delay ]
>>
>> It's important that people take benchmarks from filesystem developers
>> with a big grain of salt, which is one reason the boxacle.net results
>> are so nice. Steve is more than willing to take patches and experiment to
>> improve a given FS's results, but his business is a fair representation of
>> performance, and it shows.
>>
>
> Just looking at the results there, I notice that the RAID system XFS
> mailserver results dropped by an order of magnitude between
> 2.6.29-rc2 and 2.6.31. The single disk results are pretty
> much identical across the two kernels.
>
> IIRC, in 2.6.31 RAID0 started passing barriers through, so I suspect
> this is the issue. However, seeing as dmesg is not collected by
> the scripts after the run and the output of the mounttab does
> not show default options, I cannot tell if this is the case.
Well, the dmesg collection is done by the actual benchmark run, which
occurs after the mount command is issued, so if you are looking for
dmesg output related to mounting the XFS volume, it should be in the
dmesg we did collect. If dmesg actually included timestamps, this would
be easier to see. It seems that nothing from XFS is ending up in dmesg
at all: we run XFS with the different thread counts in order, without a
reboot, so the dmesg for the 16-thread XFS run is collected right after
the 1-thread XFS run, yet dmesg still shows ext3 as the last thing
logged. So it is safe to say no output from XFS is reaching dmesg.
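For what it's worth, here is a minimal sketch of how we could turn
printk timestamps on and save a dmesg snapshot per run so back-to-back
runs can be told apart; the output directory and tag names are made up,
not what the current scripts actually do:

#!/usr/bin/env python3
# Sketch: enable printk timestamps and save a dmesg snapshot per run,
# so runs done back to back without a reboot can be distinguished.
# The output directory and tag names below are assumptions.
import subprocess
from pathlib import Path

PRINTK_TIME = Path("/sys/module/printk/parameters/time")

def enable_printk_timestamps():
    # Same effect as booting with printk.time=1; needs root.
    if PRINTK_TIME.exists():
        PRINTK_TIME.write_text("Y\n")

def snapshot_dmesg(tag, outdir=Path("results")):
    # Save the current kernel log under a per-run tag.
    outdir.mkdir(parents=True, exist_ok=True)
    log = subprocess.run(["dmesg"], capture_output=True, text=True, check=True)
    out = outdir / ("dmesg-%s.txt" % tag)
    out.write_text(log.stdout)
    return out

if __name__ == "__main__":
    enable_printk_timestamps()
    snapshot_dmesg("xfs-1thread")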
> This
> might be worth checking by running XFS with the "nobarrier" mount
> option....
>
I could give that a try for you.
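Roughly along these lines, as a sketch only; the device and mount point
names are placeholders rather than the actual boxacle setup, and the
comparison would be the same workload run with and without barriers:

#!/usr/bin/env python3
# Sketch: remount the test volume with barriers disabled before
# repeating the run. /dev/md0 and /mnt/test are made-up names.
import subprocess

DEVICE = "/dev/md0"       # assumed RAID0 device
MOUNTPOINT = "/mnt/test"  # assumed benchmark mount point

def mount_xfs(nobarrier):
    cmd = ["mount", "-t", "xfs"]
    if nobarrier:
        cmd += ["-o", "nobarrier"]
    cmd += [DEVICE, MOUNTPOINT]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    subprocess.run(["umount", MOUNTPOINT], check=False)  # ignore "not mounted"
    mount_xfs(nobarrier=True)  # then rerun and compare with the default run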
> FWIW, is it possible to get these benchmarks run on each filesystem for
> each kernel release so ext/xfs/btrfs all get some regular basic
> performance regression test coverage?
>
Possible, yes. I just need to find the time to do the runs and, more
importantly, to post-process the data in some meaningful way. I'll see
what I can do.
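As a very rough sketch, a per-kernel sweep could look something like
the following; the filesystem list, workload names, and the
run_workload.sh wrapper are placeholders, not the real harness:

#!/usr/bin/env python3
# Sketch: sweep every filesystem/workload pair on the running kernel and
# collect one throughput number per pair into a CSV for post-processing.
# Filesystem list, workload names, and run_workload.sh are placeholders.
import csv
import subprocess
from pathlib import Path

FILESYSTEMS = ["ext3", "ext4", "xfs", "btrfs"]
WORKLOADS = ["mailserver", "oltp", "large_file_create"]

def run_benchmark(fs, workload):
    # Placeholder: call a wrapper and parse a single throughput figure
    # from its last line of output; the real scripts report much more.
    result = subprocess.run(["./run_workload.sh", fs, workload],
                            capture_output=True, text=True, check=True)
    return float(result.stdout.strip().splitlines()[-1])

def sweep(kernel, outfile):
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["kernel", "filesystem", "workload", "throughput"])
        for fs in FILESYSTEMS:
            for wl in WORKLOADS:
                writer.writerow([kernel, fs, wl, run_benchmark(fs, wl)])

if __name__ == "__main__":
    kernel = Path("/proc/sys/kernel/osrelease").read_text().strip()
    sweep(kernel, "results-%s.csv" % kernel)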
Steve
> Cheers,
>
> Dave.
>