Message-ID: <20220329094048.2107094-1-yukuai3@huawei.com>
Date:   Tue, 29 Mar 2022 17:40:42 +0800
From:   Yu Kuai <yukuai3@...wei.com>
To:     <axboe@...nel.dk>, <andriy.shevchenko@...ux.intel.com>,
        <john.garry@...wei.com>, <ming.lei@...hat.com>
CC:     <linux-block@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <yukuai3@...wei.com>, <yi.zhang@...wei.com>
Subject: [PATCH -next RFC 0/6] improve large random io for HDD

There is a defect in blk-mq compared to blk-sq: split io ends up
discontinuous if the device is under high io pressure, while split io
remains continuous in sq. This is because:

1) split bios are issued one by one; if one bio can't get a tag, it will
go to wait. - patch 2
2) each time 8 (or wake_batch) requests are done, 8 waiters will be woken
up. Thus a woken thread is unlikely to get multiple tags.
- patches 3,4
3) new io can preempt tags even if there are lots of threads waiting for
tags. - patch 5

Test environment:
x86 vm, nr_requests is set to 64, queue_depth is set to 32 and
max_sectors_kb is set to 128.

I haven't tested this patchset on a physical machine yet; I'll try later
if anyone thinks this approach is meaningful.

Fio test cmd:
[global]
filename=/dev/sda
ioengine=libaio
direct=1
offset_increment=100m

[test]
rw=randwrite
bs=512k
numjobs=256
iodepth=2

Result: ratio of sequential io (calculated from blktrace logs)
original:
21%
patched: split io thoroughly and wake up based on required tags.
40%
patched and set flag: disable tag preemption.
69%

Yu Kuai (6):
  blk-mq: add a new flag 'BLK_MQ_F_NO_TAG_PREEMPTION'
  block: refactor to split bio thoroughly
  blk-mq: record how many tags are needed for split bio
  sbitmap: wake up the number of threads based on required tags
  blk-mq: don't preempt tags except for split bios
  sbitmap: force tag preemption if free tags are sufficient

 block/bio.c               |  2 +
 block/blk-merge.c         | 95 ++++++++++++++++++++++++++++-----------
 block/blk-mq-debugfs.c    |  1 +
 block/blk-mq-tag.c        | 39 +++++++++++-----
 block/blk-mq.c            | 14 +++++-
 block/blk-mq.h            |  7 +++
 block/blk.h               |  3 +-
 include/linux/blk-mq.h    |  7 ++-
 include/linux/blk_types.h |  6 +++
 include/linux/sbitmap.h   |  8 ++++
 lib/sbitmap.c             | 33 +++++++++++++-
 11 files changed, 173 insertions(+), 42 deletions(-)

-- 
2.31.1
