Message-ID: <20201021153936.GA24818@google.com>
Date:   Wed, 21 Oct 2020 16:39:36 +0100
From:   Alessio Balsini <balsini@...roid.com>
To:     Miklos Szeredi <miklos@...redi.hu>
Cc:     Alessio Balsini <balsini@...roid.com>,
        Akilesh Kailash <akailash@...gle.com>,
        Amir Goldstein <amir73il@...il.com>,
        Antonio SJ Musumeci <trapexit@...wn.link>,
        David Anderson <dvander@...gle.com>,
        Giuseppe Scrivano <gscrivan@...hat.com>,
        Jann Horn <jannh@...gle.com>, Jens Axboe <axboe@...nel.dk>,
        Martijn Coenen <maco@...roid.com>,
        Palmer Dabbelt <palmer@...belt.com>,
        Paul Lawrence <paullawrence@...gle.com>,
        Stefano Duo <stefanoduo@...gle.com>,
        Zimuzo Ezeozue <zezeozue@...gle.com>,
        fuse-devel <fuse-devel@...ts.sourceforge.net>,
        kernel-team <kernel-team@...roid.com>,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V9 0/4] fuse: Add support for passthrough read/write

Hi Miklos, all,
 
After being stuck with some strange and hard-to-reproduce results from my SSD,
I finally decided to remove the biggest source of inconsistencies by forgetting
about the SSD and switching to a RAM block device to host my lower file system.
Getting rid of the discrete storage device removes a huge component of
slowness, highlighting the performance differences of the software parts (and
probably the benefits of the CPU cache and its coherence/invalidation
mechanisms).
 
More specifically, out of my system's 32 GiB of RAM, I reserved 24 GiB for
/dev/ram0, which was formatted as ext4.
That file system was completely filled and then cleaned up before running the
benchmarks, to make sure all of its blocks had been touched and nothing was
left in the page cache.
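
Roughly, that setup boils down to something like the following (the mount
point and exact commands are illustrative; brd's rd_size is in KiB):

  # Create a 24 GiB RAM block device and format it as ext4.
  modprobe brd rd_nr=1 rd_size=$((24 * 1024 * 1024))
  mkfs.ext4 /dev/ram0
  mount /dev/ram0 /mnt/lower

  # Fill the file system until ENOSPC, then remove the file and drop the
  # caches, so every block has been touched and nothing stays cached.
  dd if=/dev/zero of=/mnt/lower/fill bs=1M || true
  rm /mnt/lower/fill
  sync
  echo 3 > /proc/sys/vm/drop_caches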
 
As last time, I've been using a slightly modified libfuse passthrough_hp.cc
example, which simply enables passthrough mode at every open/create operation:

  git@...hub.com:balsini/libfuse fuse-passthrough-stable-v.3.9.4
 
The tests were run using fio-3.23 with the following configuration:
- bs=4Ki
- size=20Gi
- ioengine=sync
- fsync_on_close=1
- randseed=0
- create_only=0 (set to 1 during a first dry run to create the test file)

With this configuration, each benchmark performs a single open operation,
focusing on just the read/write performance.

The file size of 20 GiB was chosen so that the file does not completely fit in
the page cache.
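
For example, the sequential read job corresponds to a job file along these
lines (file name, directory, and job name are illustrative; the other jobs
only change "rw"):

  # seqread.fio -- with fio's default kb_base=1024, 4k = 4 KiB, 20g = 20 GiB.
  [global]
  directory=/mnt/fuse
  bs=4k
  size=20g
  ioengine=sync
  fsync_on_close=1
  randseed=0
  create_only=0

  [seqread]
  rw=read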
 
As mentioned in my previous email, all the caches were dropped before every
benchmark run with
 
  echo 3 > /proc/sys/vm/drop_caches
 
All the benchmarks were run 10 times, with a 1-minute cool down between runs.
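
The per-benchmark procedure can be summarized with a loop along these lines
(the job file name matches the illustrative one above):

  # Run the job 10 times with cold caches and a 1 minute cool down in between.
  for i in $(seq 1 10); do
      sync
      echo 3 > /proc/sys/vm/drop_caches
      fio seqread.fio
      sleep 60
  done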
 
Here are the updated results for this patch set:
 
+-----------+-------------+-------------+-------------+
|           |             | FUSE        |             |
| MiB/s     | FUSE        | passthrough | native      |
+-----------+-------------+-------------+-------------+
| read      | 1341(±4.2%) | 1485(±1.1%) |  1634(±.5%) |
+-----------+-------------+-------------+-------------+
| write     |   49(±2.1%) | 1304(±2.6%) | 1363(±3.0%) |
+-----------+-------------+-------------+-------------+
| randread  |   43(±1.3%) | 643(±11.1%) |  715(±1.1%) |
+-----------+-------------+-------------+-------------+
| randwrite |  27(±39.9%) |  763(±1.1%) |  790(±1.0%) |
+-----------+-------------+-------------+-------------+
 
This table shows that plain FUSE, except for sequential reads, lags well
behind both FUSE passthrough and native performance. The surprisingly good
FUSE performance for sequential reads is the result of the read-ahead
mechanism, which was easy to verify: performance dropped after setting
read_ahead_kb to 0.
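
For the record, read-ahead on the FUSE mount can be disabled through its
backing device entry in sysfs, roughly like this (the mount point is
illustrative; the third field of mountinfo is the mount's major:minor):

  BDI=$(awk '$5 == "/mnt/fuse" { print $3 }' /proc/self/mountinfo)
  echo 0 > /sys/class/bdi/$BDI/read_ahead_kb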
Except for FUSE randwrite and passthrough randread, with ~40% and ~11%
standard deviations respectively, all the other results are relatively stable.
These two outliers are not sufficient to invalidate the results, which still
show clear performance benefits.
I'm also quite happy to see that passthrough, which traverses the VFS layer
twice for each read/write operation, now consistently performs slightly below
native.
 
I wanted to make sure the results were consistent before jumping back to your
feedback on the series.
 
Thanks,
Alessio
