Message-ID: <ZXKTW7z3UH1kPvod@fedora>
Date:   Fri, 8 Dec 2023 11:54:03 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     Keith Busch <kbusch@...nel.org>
Cc:     Li Feng <fengli@...rtx.com>, Jens Axboe <axboe@...nel.dk>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Jason Wang <jasowang@...hat.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Stefan Hajnoczi <stefanha@...hat.com>,
        Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
        "open list:BLOCK LAYER" <linux-block@...r.kernel.org>,
        open list <linux-kernel@...r.kernel.org>,
        "open list:VIRTIO BLOCK AND SCSI DRIVERS" 
        <virtualization@...ts.linux.dev>
Subject: Re: [PATCH] virtio_blk: set the default scheduler to none

On Thu, Dec 07, 2023 at 07:44:37PM -0700, Keith Busch wrote:
> On Fri, Dec 08, 2023 at 10:00:36AM +0800, Ming Lei wrote:
> > On Thu, Dec 07, 2023 at 12:31:05PM +0800, Li Feng wrote:
> > > virtio-blk is generally used in cloud computing scenarios, where the
> > > performance of virtual disks is very important. The mq-deadline scheduler
> > > has a big performance drop compared to none with single queue. In my tests,
> > > mq-deadline 4k randread iops were 270k compared to 450k for none. So here
> > > the default scheduler of virtio-blk is set to "none".
> > 
> > The test result suggests you may not have tested an HDD backing for virtio-blk.
> > 
> > none loses IO merge capability to some extent, so sequential IO performance
> > probably drops with an HDD backing.
> 
> More of a curiosity, as I don't immediately even have an HDD to test
> with! Isn't it more useful for the host providing the backing HDD to use an
> appropriate IO scheduler? virtio-blk has similarities with a stacking
> block driver, and we usually don't need to stack IO schedulers.
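
For context: with blk-mq, the default elevator comes from elevator_get_default()
in block/elevator.c, which picks mq-deadline for single-hw-queue devices unless
the driver's tag set carries BLK_MQ_F_NO_SCHED_BY_DEFAULT (the loop driver sets
that flag today). I'm assuming the posted patch opts virtio-blk out the same way;
a minimal sketch, which may differ from the actual change:

/*
 * Sketch only -- assumes the existing BLK_MQ_F_NO_SCHED_BY_DEFAULT flag is
 * used (as drivers/block/loop.c does).  In virtblk_probe(), when filling in
 * the blk-mq tag set:
 */
vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE |
		      BLK_MQ_F_NO_SCHED_BY_DEFAULT;

With that flag set, elevator selection defaults to "none" for the device, while
the admin can still switch schedulers through sysfs.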

dm-rq actually uses an IO scheduler at the upper layer, and early merging has
some benefits:

1) virtio-blk in-flight requests are reduced, so there is less chance of
throttling inside the VM; meanwhile fewer (but bigger) IOs are handled by QEMU
and submitted to the host-side queue.

2) early merging in the VM is cheaper than merging on the host side: many block
IOs originating from different virtio-blk/scsi devices can arrive at the same
time, and all the images may be stored on a single disk, so these IOs become
interleaved in the host-side queue and sequential IO may turn random or become
hard to merge.

As Jens mentioned, it needs actual testing.
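
If anyone wants a quick way to see whether a scheduler change actually affects
merging for a given workload, the per-device merge counters should be enough.
Below is a small user-space sketch (not from this thread) that reads the first
six fields of /sys/block/<dev>/stat; the "vda" default device name is just an
example:

/* Print read/write I/O and merge counters for one block device. */
#include <stdio.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "vda";	/* example device */
	char path[256];
	unsigned long long rd_ios, rd_merges, rd_sectors, rd_ticks;
	unsigned long long wr_ios, wr_merges;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/%s/stat", dev);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	/* first six fields: read I/Os, read merges, read sectors,
	 * read ticks, write I/Os, write merges */
	if (fscanf(f, "%llu %llu %llu %llu %llu %llu",
		   &rd_ios, &rd_merges, &rd_sectors, &rd_ticks,
		   &wr_ios, &wr_merges) != 6) {
		fprintf(stderr, "unexpected format in %s\n", path);
		fclose(f);
		return 1;
	}
	fclose(f);

	printf("%s: read ios=%llu merges=%llu, write ios=%llu merges=%llu\n",
	       dev, rd_ios, rd_merges, wr_ios, wr_merges);
	return 0;
}

Sampling this before and after a sequential-IO run with "none" vs "mq-deadline"
should show the difference in merge counts directly.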


Thanks,
Ming
