Date:	Wed, 24 Jun 2015 21:07:41 +0800
From:	Ming Lei <ming.lei@...onical.com>
To:	linux-kernel@...r.kernel.org,
	Dave Kleikamp <dave.kleikamp@...cle.com>
Cc:	Jens Axboe <axboe@...nel.dk>, Zach Brown <zab@...bo.net>,
	Christoph Hellwig <hch@...radead.org>,
	Maxim Patlasov <mpatlasov@...allels.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	Tejun Heo <tj@...nel.org>, Dave Chinner <david@...morbit.com>
Subject: [PATCH v6 0/5] block: loop: improve loop with AIO

Hi Guys,

There are three main advantages to using direct I/O and AIO for
reading from and writing to the loop device's backing file:

1) double caching can be avoided, so memory usage is decreased
a lot

2) unlike user-space direct I/O, there is no cost of pinning
pages

3) context switches can be avoided while still obtaining good throughput
- with buffered file reads, good random I/O throughput is often obtained
only if requests are submitted concurrently from lots of tasks; but for
sequential I/O, most requests can be served from the page cache, so
concurrent submission often introduces unnecessary context switches
without improving throughput much. There was a discussion[1] about
using non-blocking I/O to address this problem for applications.
- with direct I/O and AIO, concurrent submission can be avoided, and
random read throughput is not hurt in the meantime

So this patchset tries to improve the loop driver via AIO: memory usage
can be decreased by about 45% (see the detailed data in the commit log
of patch 4), and I/O throughput is not affected either.
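To illustrate point (1) from user space: the minimal sketch below is not
the in-kernel path this patchset takes (the loop driver submits AIO on the
backing file from kernel context); it only shows, for an assumed file named
backing.img, how O_DIRECT bypasses the page cache so data is not cached
twice, and why buffers and offsets have to be aligned:

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLK_SIZE 4096	/* assumed alignment; the real value depends on the device */

int main(void)
{
	void *buf;
	ssize_t n;
	int fd;

	/* O_DIRECT reads bypass the page cache, so the data is not cached
	 * twice (once for the loop device, once for the backing file). */
	fd = open("backing.img", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* O_DIRECT requires the buffer, offset and length to be aligned
	 * to the logical block size of the underlying device. */
	if (posix_memalign(&buf, BLK_SIZE, BLK_SIZE) != 0) {
		fprintf(stderr, "posix_memalign failed\n");
		close(fd);
		return 1;
	}

	n = pread(fd, buf, BLK_SIZE, 0);
	if (n < 0)
		perror("pread");
	else
		printf("read %zd bytes without populating the page cache\n", n);

	free(buf);
	close(fd);
	return 0;
}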

V6:
        - only patches 4 and 5 are updated
        - check lo->lo_offset to decide if direct I/O can be supported (4/5);
        see the sketch after the changelog
        - introduce one flag so that userspace (losetup) can see whether
        direct I/O is used to access the backing file (4/5); a user-space
        example follows the diffstat
        - implement patches for util-linux (losetup) so that losetup can
        enable the direct I/O feature (4/5):

           http://kernel.ubuntu.com/git/ming/util-linux.git/log/?h=losetup-dio

        - remove the direct I/O control interface from sysfs (4/5)
        - handle partial reads in the direct read case (5/5)
        - add more comments for direct I/O (5/5)
V5:
        - don't introduce IOCB_DONT_DIRTY_PAGE, and bypass dirtying
        for ITER_KVEC and ITER_BVEC direct I/O (read), as requested by
        Christoph

V4:
        - add a detailed commit log for 'use kthread_work'
        - allow userspace (sysfs, losetup) to decide if dio/aio is
        used, as suggested by Christoph and Dave Chinner
        - only use dio if the backing block device's minimum I/O size
        is 512, as pointed out by Dave Chinner & Christoph
V3:
        - based on Al's iov_iter work and Christoph's kiocb changes
        - use kthread_work
        - introduce IOCB_DONT_DIRTY_PAGE flag
        - set QUEUE_FLAG_NOMERGES for loop's request queue
V2:
        - remove 'extra' parameter to aio_kernel_alloc()
        - try to avoid memory allocation inside the queue req callback
        - introduce a 'use_mq' sysfs file for enabling or disabling kernel AIO
V1:
        - link:
                http://marc.info/?t=140803157700004&r=1&w=2
        - improve failure path in aio_kernel_submit()
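For reference, the kind of "can we use direct I/O?" decision described in
the V6 and V4 entries above can be sketched roughly as follows. The helper
name and exact conditions are assumptions for illustration, not the code
added by the patchset (the real check lives in drivers/block/loop.c):

#include <stdbool.h>

/*
 * Hypothetical helper illustrating the decision logic from the changelog:
 * direct I/O is only enabled when the backing device's minimum I/O size
 * is 512 and the loop offset is aligned to it.
 */
static bool loop_can_use_dio(long long lo_offset,
			     unsigned int backing_min_io_size)
{
	/* V4: only use dio if the backing device's minimum I/O size is 512 */
	if (backing_min_io_size != 512)
		return false;

	/* V6: lo_offset must be aligned, otherwise direct I/O requests
	 * against the backing file would be misaligned */
	if (lo_offset & (backing_min_io_size - 1))
		return false;

	return true;
}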

 drivers/block/loop.c      | 235 +++++++++++++++++++++++++++++++++++-----------
 drivers/block/loop.h      |  13 +--
 fs/direct-io.c            |   9 +-
 include/uapi/linux/loop.h |   1 +
 4 files changed, 193 insertions(+), 65 deletions(-)
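The single line added to include/uapi/linux/loop.h corresponds to the flag
mentioned in the V6 changelog, which lets userspace see whether a loop
device is using direct I/O on its backing file. A losetup-style program
could read it back roughly as below; the flag name LO_FLAGS_DIRECT_IO is an
assumption here (use whatever name the uapi header finally defines), while
LOOP_GET_STATUS64 and struct loop_info64 are the existing loop ioctl
interface:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/loop.h>
#include <unistd.h>

#ifndef LO_FLAGS_DIRECT_IO
#define LO_FLAGS_DIRECT_IO 16	/* assumed value; take it from the uapi header */
#endif

int main(int argc, char **argv)
{
	struct loop_info64 info;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s /dev/loopN\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* LOOP_GET_STATUS64 reports the current loop configuration,
	 * including the lo_flags bitmask. */
	if (ioctl(fd, LOOP_GET_STATUS64, &info) < 0) {
		perror("LOOP_GET_STATUS64");
		close(fd);
		return 1;
	}

	printf("direct I/O on backing file: %s\n",
	       (info.lo_flags & LO_FLAGS_DIRECT_IO) ? "yes" : "no");

	close(fd);
	return 0;
}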

Thanks,
Ming

