Message-ID: <87fvxvz8qw.fsf@devron.myhome.or.jp>
Date:	Fri, 10 May 2013 14:47:35 +0900
From:	OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
To:	Dave Chinner <david@...morbit.com>
Cc:	Daniel Phillips <daniel.raymond.phillips@...il.com>,
	linux-fsdevel@...r.kernel.org, tux3@...3.org,
	linux-kernel@...r.kernel.org
Subject: Re: Tux3 Report: Faster than tmpfs, what?

Dave Chinner <david@...morbit.com> writes:

>> tux3:
>>     Operation      Count    AvgLat    MaxLat
>>     ----------------------------------------
>>     NTCreateX    1477980     0.003    12.944
> ....
>>     ReadX        2316653     0.002     0.499
>>     LockX           4812     0.002     0.207
>>     UnlockX         4812     0.001     0.221
>>     Throughput 1546.81 MB/sec  1 clients  1 procs  max_latency=12.950 ms
>
> Hmmm... No "Flush" operations. Gotcha - you've removed the data
> integrity operations from the benchmark.

Right. That's because tux3 doesn't implement fsync() yet. So I did:

	grep -v Flush /usr/share/dbench/client.txt > client2.txt
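
For reference, the benchmark then runs against the filtered loadfile,
something like this (a single process, matching the "1 clients  1 procs"
in the output above):

	dbench -c client2.txt 1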

Why is that important for the comparison?

> Ah, I get it now - you've done that so the front end of tux3 won't
> encounter any blocking operations and so can offload 100% of
> operations. It also explains the sync call every 4 seconds to keep
> tux3 back end writing out to disk so that a) all the offloaded work
> is done by the sync process and not measured by the benchmark, and
> b) so the front end doesn't overrun queues and throttle or run out
> of memory.

Our backend is still running in debugging mode (it flushes every 10
transactions, for stress testing/debugging). There is no interface to
use normal writeback timing yet, and I haven't tackled that part.

And if normal writeback can't beat crappy fixed timing (4 secs), that
just means we have to improve the writeback timing. I.e., a fixed
periodic sync should be slower than well-tuned timing, right?
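
The "crappy fixed timing" here is nothing fancier than a periodic sync
forced from outside tux3, something like:

	# sketch: flush the tux3 backend every 4 seconds
	while true; do sync; sleep 4; done

A real writeback policy should be able to pick better moments than that.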

> Oh, so nicely contrived. But terribly obvious now that I've found
> it.  You've carefully crafted the benchmark to demonstrate a best
> case workload for the tux3 architecture, then carefully not
> measured the overhead of the work tux3 has offloaded, and then not
> disclosed any of this in the hope that all people will look at is
> the headline.
>
> This would make a great case study for a "BenchMarketing For
> Dummies" book.

Simply wrong. I did this to start optimizing tux3 (we know we have many
places to optimize in tux3), and that post was simply the result. If you
can't see at all from it what we achieved with the frontend/backend
design, that makes me a bit sad.

From this result, I think I could improve tmpfs/ext4 the same way tux3
does (for Unlink/Deltree), if I wanted to.

Thanks.
-- 
OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
