Message-ID: <CA+icZUWfX+QmroE6j74C7o-BdfMF5=6PdYrA=5W_JCKddqkJgQ@mail.gmail.com>
Date:   Thu, 28 May 2020 19:02:35 +0200
From:   Sedat Dilek <sedat.dilek@...il.com>
To:     Jens Axboe <axboe@...nel.dk>
Cc:     io-uring@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        akpm@...ux-foundation.org
Subject: Re: [PATCHSET v5 0/12] Add support for async buffered reads

On Tue, May 26, 2020 at 10:59 PM Jens Axboe <axboe@...nel.dk> wrote:
>
> We technically support this already through io_uring, but it's
> implemented with a thread backend to support cases where we would
> block. This isn't ideal.
>
> After a few prep patches, the core of this patchset is adding support
> for async callbacks on page unlock. With this primitive, we can simply
> retry the IO operation. With io_uring, this works a lot like poll based
> retry for files that support it. If a page is currently locked and
> needed, -EIOCBQUEUED is returned with a callback armed. The caller's
> callback is responsible for restarting the operation.
>
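As a concrete illustration of that flow, here is a minimal sketch in C
(all "example_*" names are illustrative, not taken from the series;
iocb->ki_waitq and IOCB_WAITQ are the pieces this patchset adds): the
submitter arms a wait-queue entry and treats -EIOCBQUEUED as "parked",
and the wake callback re-drives the read once the page is unlocked.

    #include <linux/fs.h>
    #include <linux/pagemap.h>
    #include <linux/wait.h>
    #include <linux/workqueue.h>

    struct example_req {
            struct kiocb           iocb;
            struct iov_iter        iter;
            struct wait_page_queue wpq;        /* entry armed on the page */
            struct work_struct     retry_work; /* re-runs example_submit() */
    };

    /* Runs when the page we were blocked on is unlocked. */
    static int example_wake(struct wait_queue_entry *wait, unsigned mode,
                            int sync, void *key)
    {
            struct example_req *req = container_of(wait, struct example_req,
                                                   wpq.wait);

            list_del_init(&wait->entry);
            schedule_work(&req->retry_work);   /* retry from task context */
            return 1;
    }

    static ssize_t example_submit(struct example_req *req)
    {
            struct kiocb *iocb = &req->iocb;
            ssize_t ret;

            /*
             * If the read path hits a locked page, it queues
             * iocb->ki_waitq on that page's waitqueue and returns
             * -EIOCBQUEUED instead of sleeping on the page lock.
             */
            init_waitqueue_func_entry(&req->wpq.wait, example_wake);
            iocb->ki_waitq = &req->wpq;
            iocb->ki_flags |= IOCB_WAITQ;

            ret = call_read_iter(iocb->ki_filp, iocb, &req->iter);
            if (ret == -EIOCBQUEUED)
                    return 0;  /* example_wake() will restart the read */
            return ret;
    }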
> With this callback primitive, we can add support for
> generic_file_buffered_read(), which is what most file systems end up
> using for buffered reads. XFS/ext4/btrfs/bdev are wired up, and it's
> probably trivial to add more.
>
> A file signals support for this by setting FMODE_BUF_RASYNC, similar
> to what we do for FMODE_NOWAIT. I'm open to suggestions on whether
> this is the preferred method or not.
>
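For illustration, the per-filesystem opt-in appears to be a one-line
flag in the open routine, mirroring FMODE_NOWAIT; this sketch is
modeled on the two-line ext4 hunk in the diffstat at the end (the
exact context is a guess):

    static int ext4_file_open(struct inode *inode, struct file *filp)
    {
            /* ... existing open-time setup ... */

            /* Advertise support for async buffered reads. */
            filp->f_mode |= FMODE_NOWAIT | FMODE_BUF_RASYNC;
            return dquot_file_open(inode, filp);
    }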
> In terms of results, I wrote a small test app that randomly reads 4G
> of data in 4K chunks from a file hosted by ext4. The app uses a queue
> depth of 32. If you want to test this yourself, you can just use fio
> with buffered=1 and ioengine=io_uring. No application changes are
> needed to use the more optimized buffered async read.
>
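A minimal fio invocation along those lines (the path matches the job
file below; the size mirrors the 4G test) would be something like:

    $ fio --name=bufread --filename=/data/file --size=4g --bs=4k \
          --rw=randread --ioengine=io_uring --buffered=1 --iodepth=32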
> preadv for comparison:
>         real    1m13.821s
>         user    0m0.558s
>         sys     0m11.125s
>         CPU     ~13%
>
> Mainline:
>         real    0m12.054s
>         user    0m0.111s
>         sys     0m5.659s
>         CPU     ~32% + ~50% == ~82%
>
> This patchset:
>         real    0m9.283s
>         user    0m0.147s
>         sys     0m4.619s
>         CPU     ~52%
>
> The CPU numbers are just a rough estimate. For the mainline io_uring
> run, this includes the app itself and all the threads doing IO on its
> behalf (32% for the app, ~1.6% per worker and 32 of them). Context
> switch rate is much smaller with the patchset, since we only have the
> one task performing IO.
>
> Also ran a simple fio based test case, varying the queue depth from 1
> to 16, doubling every time:
>
> [buf-test]
> filename=/data/file
> direct=0
> ioengine=io_uring
> norandommap
> rw=randread
> bs=4k
> iodepth=${QD}
> randseed=89
> runtime=10s
>
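fio expands ${QD} from the environment, so assuming the job file above
is saved as buf-test.fio, the sweep can be driven with something like:

    $ for QD in 1 2 4 8 16; do QD=$QD fio buf-test.fio; done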
> QD/Test         Patchset IOPS           Mainline IOPS
> 1               9046                    8294
> 2               19.8k                   18.9k
> 4               39.2k                   28.5k
> 8               64.4k                   31.4k
> 16              65.7k                   37.8k
>
> I'm outside of my usual environment, so this is just running on a
> virtualized NVMe device in qemu, using ext4 as the file system. NVMe
> isn't very efficient virtualized, so we run out of steam at ~65K IOPS,
> which is why we flatline on the patched side (nvme_submit_cmd() eats
> ~75% of the test app CPU). Before that happens, it's a linear
> increase. Not shown is the context switch rate, which is massively
> lower with the new code. The old thread offload adds a blocking
> thread per pending IO, so the context switch rate quickly goes
> through the roof.
>
> The goal here is efficiency. Async thread offload adds latency, and
> it also adds noticeable overhead on items such as adding pages to the
> page cache. By allowing proper async buffered read support, we don't
> have X threads hammering on the same inode page cache, we have just
> the single app actually doing IO.
>
> Been beating on this and it's solid for me, and I'm now pretty happy
> with how it all turned out. Not aware of any missing bits/pieces or
> code cleanups that need doing.
>
> Series can also be found here:
>
> https://git.kernel.dk/cgit/linux-block/log/?h=async-buffered.5
>
> or pull from:
>
> git://git.kernel.dk/linux-block async-buffered.5
>

Hi Jens,

I have pulled linux-block.git#async-buffered.5 on top of Linux v5.7-rc7.
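For reference, that was roughly (the local branch name is arbitrary):

    $ git checkout -b async-buffered-test v5.7-rc7
    $ git pull git://git.kernel.dk/linux-block async-buffered.5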

From first impressions:
Booting into the system (up to the sddm display/login manager) took a
bit longer.
The same goes for logging in and starting KDE/Plasma.

I am building/linking with LLVM/Clang/LLD v10.0.1-rc1 on Debian/testing AMD64.

Here I have an internal HDD (SATA), and my Debian system is on an
external HDD connected via USB 3.0.
Primarily, I use ext4.

The above is just the "emotional" side; what I need now are some
technical instructions.

How can I see that async buffered reads are active on an
ext4-formatted partition?

Do I need a special boot parameter (GRUB line)?

Do I need to activate some cool variables via sysfs?

Do I need to pass an option via an fstab entry?

Are any kconfig options related to async buffered reads not set?
Which ones would make sense?
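For what it's worth, the quoted diffstat below doesn't seem to touch
any Kconfig files, so presumably nothing beyond the usual
CONFIG_IO_URING is needed; easy to double-check against a config file:

    $ grep CONFIG_IO_URING config-5.7.0-rc7-4-amd64-clang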

I am asking all this before doing some fio testing.

Attached are my linux-config and dmesg-output files.

Thanks.

Regards,
- Sedat -


>  fs/block_dev.c            |   2 +-
>  fs/btrfs/file.c           |   2 +-
>  fs/ext4/file.c            |   2 +-
>  fs/io_uring.c             | 130 ++++++++++++++++++++++++++++++++++++--
>  fs/xfs/xfs_file.c         |   2 +-
>  include/linux/blk_types.h |   3 +-
>  include/linux/fs.h        |  10 ++-
>  include/linux/pagemap.h   |  67 ++++++++++++++++++++
>  mm/filemap.c              | 111 ++++++++++++++++++++------------
>  9 files changed, 279 insertions(+), 50 deletions(-)
>
> Changes since v4:
> - Correct commit message, iocb->private -> iocb->ki_waitq
> - Get rid of io_uring goto, use an iter read helper
> Changes since v3:
> - io_uring: don't retry if REQ_F_NOWAIT is set
> - io_uring: alloc req->io if the request type didn't already
> - Add iocb->ki_waitq instead of (ab)using iocb->private
> Changes since v2:
> - Get rid of unnecessary wait_page_async struct, just use wait_page_queue
> - Add another prep handler, adding wake_page_match()
> - Use wake_page_match() in both callers
> Changes since v1:
> - Fix an issue with inline page locking
> - Fix a potential race with __wait_on_page_locked_async()
> - Fix a hang related to not setting page_match, thus missing a wakeup
>
> --
> Jens Axboe
>
>

Attachments: "dmesg-T_5.7.0-rc7-4-amd64-clang.txt" (text/plain, 70719 bytes);
"config-5.7.0-rc7-4-amd64-clang" (application/octet-stream, 229503 bytes)
