Message-ID: <20130514063438.GF29466@dastard>
Date: Tue, 14 May 2013 16:34:38 +1000
From: Dave Chinner <david@...morbit.com>
To: OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
Cc: Daniel Phillips <daniel.raymond.phillips@...il.com>,
linux-fsdevel@...r.kernel.org, tux3@...3.org,
linux-kernel@...r.kernel.org
Subject: Re: Tux3 Report: Faster than tmpfs, what?
On Fri, May 10, 2013 at 02:47:35PM +0900, OGAWA Hirofumi wrote:
> Dave Chinner <david@...morbit.com> writes:
>
> >> tux3:
> >> Operation Count AvgLat MaxLat
> >> ----------------------------------------
> >> NTCreateX 1477980 0.003 12.944
> > ....
> >> ReadX 2316653 0.002 0.499
> >> LockX 4812 0.002 0.207
> >> UnlockX 4812 0.001 0.221
> >> Throughput 1546.81 MB/sec 1 clients 1 procs max_latency=12.950 ms
> >
> > Hmmm... No "Flush" operations. Gotcha - you've removed the data
> > integrity operations from the benchmark.
>
> Right. Because tux3 does not implement fsync() yet. So I did
>
> grep -v Flush /usr/share/dbench/client.txt > client2.txt
>
> Why is it important for comparing?
Because nobody could reproduce your results without working that
out. You didn't disclose that you'd made these changes, which
makes it extremely misleading as to what the results mean. Given
the headline-grabbing nature of the report, it's deceptive at best.
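For instance, just rerunning the test needs the exact dbench invocation
with the modified loadfile - something like the sketch below, where the
-c/--loadfile, -D and -t switches are the usual dbench options but the
mount point, runtime and client count are placeholders, not values from
your report:

  # illustrative sketch only - placeholders, not the actual test parameters
  dbench -D /mnt/test -c client2.txt -t 600 1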
I don't care how fast tux3 is - I care about being able to reproduce
other people's results. Hence, if you are going to report benchmark
results comparing filesystems, then you need to tell everyone exactly
what you've tweaked and why, from the hardware all the way up to the
benchmark config.
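As a sketch of the sort of disclosure that makes a result reproducible
(the mount point and device name below are placeholders, not details
from this test):

  # illustrative only - the environment details a benchmark report should include
  uname -r                                # kernel version
  grep /mnt/test /proc/mounts             # filesystem type and mount options
  cat /sys/block/sdX/queue/scheduler      # I/O scheduler on the test device
  cat /sys/block/sdX/queue/rotational     # spinning disk or SSD
  diff /usr/share/dbench/client.txt client2.txt  # exactly what was changed in the workload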
Work on how *you* report *your* results - don't let Daniel turn them
into some silly marketing fluff that tries to grab headlines.
-Dave.
--
Dave Chinner
david@...morbit.com