Message-ID: <CAEK8JBDpy+AvCfEZ5UAM+FojFBK2Cy1BqsrctkX-6QD6jvTK4w@mail.gmail.com>
Date:   Tue, 16 Oct 2018 10:26:08 +0800
From:   Feng Li <lifeng1519@...il.com>
To:     amit@...nel.org
Cc:     dgilbert@...hat.com, virtualization@...ts.linux-foundation.org,
        linux-kernel <linux-kernel@...r.kernel.org>,
        qemu-discuss@...gnu.org, qemu-devel@...gnu.org,
        "linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>
Subject: Re: [Qemu-devel] virtio-console downgrade the virtio-pci-blk performance

Hi Amit,

Thanks for your response.

See inline comments.

On Tue, Oct 16, 2018 at 2:51 AM, Amit Shah <amit@...nel.org> wrote:
>
> On (Thu) 11 Oct 2018 [18:15:41], Feng Li wrote:
> > Add Amit Shah.
> >
> > After some tests, we found:
> > - the number of virtio-serial ports is inversely proportional to the
> > iSCSI virtio-blk-pci performance. If we set the virtio-serial port
> > count to 2 ("<controller type='virtio-serial' index='0' ports='2'/>"),
> > the performance downgrade is minimal.
>
> If you use multiple virtio-net (or blk) devices -- just register, not
> necessarily use -- does that also bring the performance down?  I

Yes. Just registering the virtio-serial device, without using it, brings
the virtio-blk performance down.
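
Side note for anyone digging into this: a quick way to see how many
interrupt vectors the idle virtio-serial device pins down in the guest
(the "virtio2" name below is a guess on my side; match it to whichever
virtio device is the serial one on your system):

  # list interrupt vectors per virtio device
  grep virtio /proc/interrupts
  # count the vectors belonging to the (unused) serial device alone
  grep -c virtio2 /proc/interrupts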

> suspect it's the number of interrupts that get allocated for the
> ports.  Also, could you check if MSI is enabled?  Can you try with and
> without?  Can you also reproduce if you have multiple virtio-serial
> controllers with 2 ports each (totalling up to whatever number that
> reproduces the issue).

This is the full cmd:

/usr/libexec/qemu-kvm \
  -name guest=6a798fde-c5d0-405a-b495-f2726f9d12d5,debug-threads=on \
  -machine pc-i440fx-rhel7.5.0,accel=kvm,usb=off,dump-guest-core=off \
  -cpu host \
  -m size=2097152k,slots=255,maxmem=4194304000k \
  -uuid 702bb5bc-2aa3-4ded-86eb-7b9cf5c1e2d9 \
  -drive file.driver=iscsi,file.portal=127.0.0.1:3260,file.target=iqn.2016-02.com.smartx:system:zbs-iscsi-datastore-1537958580215k,file.lun=74,file.transport=tcp,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on \
  -drive file.driver=iscsi,file.portal=127.0.0.1:3260,file.target=iqn.2016-02.com.smartx:system:zbs-iscsi-datastore-1537958580215k,file.lun=182,file.transport=tcp,format=raw,if=none,id=drive-virtio-disk1,cache=none,aio=native \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1,bootindex=2,write-cache=on \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 \
  -vnc 0.0.0.0:100 \
  -netdev user,id=fl.1,hostfwd=tcp::5555-:22 \
  -device e1000,netdev=fl.1 \
  -msg timestamp=on \
  -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5

qemu version: qemu-kvm-2.10.0-21
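
For what it's worth, the downgrade should also be reproducible without
iSCSI (we saw it with a local/ram disk too, see below). An untested,
stripped-down sketch, with /var/tmp/test.img as a placeholder raw image:

/usr/libexec/qemu-kvm -enable-kvm -cpu host -smp 8 -m 2G \
  -drive file=/var/tmp/test.img,format=raw,if=none,id=d0,cache=none,aio=native \
  -device virtio-blk-pci,drive=d0,id=virtio-disk0 \
  -device virtio-serial-pci,id=virtio-serial0   # drop this line for the baseline run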

I guess MSI is enabled; I can see these logs:
[    2.230194] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[    3.556376] virtio-pci 0000:00:05.0: irq 24 for MSI/MSI-X
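
To try with and without MSI, as you asked, I suppose we could either boot
the guest with pci=nomsi on its kernel command line, or force INTx for the
serial device alone via the vectors property (untested on our side):

-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5,vectors=0

A more direct check that MSI-X is actually active, from inside the guest:

  # 00:05.0 matches addr=0x5 above; expect "MSI-X: Enable+"
  lspci -vv -s 00:05.0 | grep -i msi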

The issue can be reproduced easily using one virtio-serial controller with
31 ports, which is the default port number.
I don't think it's necessary to reproduce it with multiple controllers.
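
If it helps, the port count can also be lowered directly on the QEMU
command line via the max_ports property of virtio-serial-pci (it defaults
to 31 and, as far as I understand, is what libvirt's ports= attribute maps
to):

-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5,max_ports=2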

>
>                 Amit
>
> >
> > - if we use a local disk/RAM disk as the virtio-blk-pci disk, the
> > performance downgrade is still obvious.
> >
> >
> > Could anyone offer some help with this issue?
> >
> > On Mon, Oct 1, 2018 at 10:58 PM, Feng Li <lifeng1519@...il.com> wrote:
> > >
> > > Hi Dave,
> > > My comments are in-line.
> > >
> > > On Mon, Oct 1, 2018 at 7:41 PM, Dr. David Alan Gilbert <dgilbert@...hat.com> wrote:
> > > >
> > > > * Feng Li (lifeng1519@...il.com) wrote:
> > > > > Hi,
> > > > > I found an obvious performance downgrade when virtio-console is
> > > > > combined with virtio-blk-pci.
> > > > >
> > > > > This phenomenon exists in nearly all QEMU versions and all Linux
> > > > > distros (CentOS 7, Fedora 28, Ubuntu 18.04).
> > > > >
> > > > > This is the disk command:
> > > > > -drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
> > > > > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> > > > >
> > > > > If I add "-device
> > > > > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5", the virtio
> > > > > disk's 4k IOPS (randread/randwrite) drops from 60k to 40k.
> > > > >
> > > > > In the VM, if I rmmod virtio-console, the performance goes back to normal.
> > > > >
> > > > > Any idea about this issue?
> > > > >
> > > > > I don't know whether this is a QEMU issue or a kernel issue.
> > > >
> > > > It sounds odd;  can you provide more details on:
> > > >   a) The benchmark you're using.
> > > I'm using fio; the config is:
> > > [global]
> > > ioengine=libaio
> > > iodepth=128
> > > runtime=120
> > > time_based
> > > direct=1
> > >
> > > [randread]
> > > stonewall
> > > bs=4k
> > > filename=/dev/vdb
> > > rw=randread
> > >
> > > >   b) the host and the guest config (number of cpus etc)
> > > The qemu cmd is: /usr/libexec/qemu-kvm --device virtio-balloon -m 16G
> > > --enable-kvm -cpu host -smp 8
> > > or: qemu-system-x86_64 --device virtio-balloon -m 16G --enable-kvm -cpu
> > > host -smp 8
> > >
> > > The result is the same.
> > >
> > > >   c) Why are you running it with iscsi back to the same host - why not
> > > >      just simplify the test back to a simple file?
> > > >
> > >
> > > Because my iSCSI target can supply high IOPS.
> > > With a slow disk, the performance downgrade would not be so obvious.
> > > It's easy to see; you could try it.
> > >
> > >
> > > > Dave
> > > >
> > > > >
> > > > > Thanks in advance.
> > > > > --
> > > > > Thanks and Best Regards,
> > > > > Alex
> > > > >
> > > > --
> > > > Dr. David Alan Gilbert / dgilbert@...hat.com / Manchester, UK
> > >
> > >
> > >
> > > --
> > > Thanks and Best Regards,
> > > Feng Li(Alex)
> >
> >
> >
> > --
> > Thanks and Best Regards,
> > Feng Li(Alex)
>
>                 Amit
> --
> http://amitshah.net/



-- 
Thanks and Best Regards,
Feng Li(Alex)
