Message-ID: <CAEsagEism-SQB9LRC+EuPLoBiFOEHc0peLjh3cDxwrNe3Jz=2w@mail.gmail.com>
Date:	Fri, 10 May 2013 23:12:27 -0700
From:	Daniel Phillips <daniel.raymond.phillips@...il.com>
To:	Dave Chinner <david@...morbit.com>
Cc:	linux-kernel@...r.kernel.org, tux3@...3.org,
	linux-fsdevel@...r.kernel.org
Subject: Re: Tux3 Report: Faster than tmpfs, what?

Hi Dave,

Thanks for the catch - I should indeed have noted that "modified
dbench" was used for this benchmark, which amplifies Tux3's advantage
in delete performance. That omission does not make the results any
less interesting: we beat tmpfs on that particular load, and beating
tmpfs at anything is worthy of note. Obviously, all three filesystems
ran the same load.

We agree that "classic unadulterated dbench" is an important Linux
benchmark for comparison with other filesystems. I think we should
implement a proper fsync for that one rather than just mapping fsync
to a full-filesystem sync. That is not very far in the future;
however, our main focus right now is optimizing spinning disk
allocation. It probably makes logistical sense to leave fsync as it
is for now and concentrate on the more important issues.
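
For anyone wondering what "fsync = sync" looks like in practice, here
is a rough sketch (illustrative only, not the actual Tux3 code) of an
fsync method that simply forces a full-filesystem sync; a proper fsync
would instead flush only the dirty state belonging to the one inode:

#include <linux/fs.h>

/*
 * Illustrative sketch, not Tux3 code: "fsync = sync" means the
 * per-file fsync method falls back to syncing the whole filesystem.
 */
static int example_fsync(struct file *file, loff_t start, loff_t end,
			 int datasync)
{
	struct super_block *sb = file->f_mapping->host->i_sb;

	/*
	 * Correct but heavy-handed: pushes every dirty delta out to
	 * disk, not just the dirty data and metadata of this file.
	 */
	return sync_filesystem(sb);
}

static const struct file_operations example_file_ops = {
	.fsync = example_fsync,
	/* ... other methods elided ... */
};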

I do not agree with your assertion that the benchmark as run is
invalid, only that the modified load should have been described in
detail. I presume you would like to see a new bakeoff using "classic"
dbench. Patience please, this will certainly come down the pipe in due
course. We might not beat tmpfs on that load, but we certainly expect
to outperform some other filesystems.

Note that Tux3 ran this benchmark using its normal strong consistency
semantics, roughly similar to Ext4's data=journal. In that light, the
results are even more interesting.

> ...you've done that so the front end of tux3 won't
> encounter any blocking operations and so can offload 100% of
> operations.

Yes, that is the entire point of our front/back design: reduce
application latency for buffered filesystem transactions.
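
To make that division of labour concrete, here is a minimal sketch
(assumed structure for illustration, not the actual Tux3 source) of
how a front end can record a change without ever blocking on I/O:

#include <linux/list.h>

/*
 * Illustrative sketch of the front/back split, not Tux3 code.  The
 * front end appends a change to the current in-memory delta and
 * returns at once; the back end writes completed deltas to disk
 * asynchronously.
 */
struct delta {
	struct list_head dirty_items;	/* changes queued in this delta */
	unsigned long seq;		/* delta sequence number */
};

/* Front end: runs in the application's context, must not block on I/O. */
static void frontend_record_change(struct delta *cur, struct list_head *item)
{
	list_add_tail(item, &cur->dirty_items);
	/* No I/O here; the back-end thread picks the delta up later. */
}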

> It also explains the sync call every 4 seconds to keep
> tux3 back end writing out to disk so that a) all the offloaded work
> is done by the sync process and not measured by the benchmark, and
> b) so the front end doesn't overrun queues and throttle or run out
> of memory.

Entirely correct. That's really nice, don't you think? You have
accurately described a central part of Tux3's design: our "delta"
mechanism. We expect to spend considerable effort tuning the details
of our delta transition behaviour as time goes by. However, this is
not an immediate priority, because the simplistic "flush every 4
seconds" hack already works pretty well for a lot of loads.
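
As a rough illustration of that hack (assumed mechanics only; the real
code has more to it), a periodic delta transition can be driven by a
self-rearming delayed work item:

#include <linux/workqueue.h>
#include <linux/jiffies.h>

/*
 * Illustrative sketch, not the actual Tux3 implementation: close the
 * current delta every 4 seconds, hand it to the back end for writeout,
 * and open a fresh delta for the front end to fill.
 */
static struct delayed_work delta_work;

static void delta_transition(struct work_struct *work)
{
	/* ... mark the current delta complete, queue it for writeout ... */
	/* ... start a new empty delta for the front end ... */
	schedule_delayed_work(&delta_work, 4 * HZ);	/* re-arm */
}

static void delta_timer_init(void)
{
	INIT_DELAYED_WORK(&delta_work, delta_transition);
	schedule_delayed_work(&delta_work, 4 * HZ);
}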

Thanks for your feedback,

Daniel
