Message-Id: <0460F92A-3DF6-4F7A-903B-6434555577CC@linux.alibaba.com>
Date: Wed, 25 Dec 2019 23:20:07 +0800
From: "Liu, Jiang" <gerry@...ux.alibaba.com>
To: Jason Wang <jasowang@...hat.com>
Cc: Zha Bin <zhabin@...ux.alibaba.com>, linux-kernel@...r.kernel.org,
mst@...hat.com, slp@...hat.com, virtio-dev@...ts.oasis-open.org,
jing2.liu@...el.com, chao.p.peng@...el.com
Subject: Re: [PATCH v1 2/2] virtio-mmio: add features for virtio-mmio
specification version 3
> On Dec 25, 2019, at 6:20 PM, Jason Wang <jasowang@...hat.com> wrote:
>
>
> On 2019/12/25 10:50 AM, Zha Bin wrote:
>> From: Liu Jiang <gerry@...ux.alibaba.com>
>>
>> Userspace VMMs (e.g. Qemu microvm, Firecracker) use virtio over MMIO
>> devices as a lightweight machine model for the modern cloud. The
>> standard virtio over MMIO transport layer only supports one legacy
>> interrupt, which is much heavier than the virtio over PCI transport
>> layer using MSI. A legacy interrupt has a long work path and causes
>> extra VM exits in the following cases, which considerably slows down
>> performance:
>>
>> 1) read interrupt status register
>> 2) update interrupt status register
>> 3) write IOAPIC EOI register
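
For context, the legacy path in the guest driver looks roughly like the
sketch below (modeled on drivers/virtio/virtio_mmio.c; the handler name
and parameter passing are illustrative). Steps 1) and 2) above are each
a trapped MMIO register access, and 3) follows later in generic IRQ
handling:

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/virtio_mmio.h>

/* Sketch only: each register access below traps to the VMM. */
static irqreturn_t vm_interrupt_sketch(int irq, void *opaque)
{
	void __iomem *base = (void __iomem *)opaque;	/* device registers */
	unsigned long status;

	/* 1) read interrupt status register -> VM exit */
	status = readl(base + VIRTIO_MMIO_INTERRUPT_STATUS);

	/* 2) update (acknowledge) interrupt status register -> VM exit */
	writel(status, base + VIRTIO_MMIO_INTERRUPT_ACK);

	/* ... run config-change / vring callbacks based on status ... */

	/* 3) the IOAPIC EOI write happens afterwards in generic IRQ code */
	return status ? IRQ_HANDLED : IRQ_NONE;
}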
>>
>> We propose updating virtio over MMIO to version 3 [1], adding the
>> following new features to enhance performance:
>>
>> 1) Support Message Signaled Interrupts (MSI), which improves
>> interrupt performance for virtio multi-queue devices.
>> 2) Support per-queue doorbells, so the guest kernel can write
>> directly to the doorbells provided by virtio devices.
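
As an illustration of 2), the guest-side notify path could look roughly
like the sketch below; the use of vq->priv to carry a pre-mapped
doorbell address is borrowed from the virtio-pci driver and is only
illustrative here, not the exact layout in the spec proposal:

#include <linux/io.h>
#include <linux/virtio.h>

/* Sketch only: write the queue index to the queue's own doorbell
 * instead of the shared VIRTIO_MMIO_QUEUE_NOTIFY register, so the VMM
 * (or hardware) can dispatch on the address alone.
 */
static bool vm_notify_per_queue(struct virtqueue *vq)
{
	void __iomem *db = (void __iomem *)vq->priv;	/* mapped at queue setup */

	writel(vq->index, db);
	return true;
}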
>>
>> The following is the network TCP_RR performance report, comparing a
>> virtio-pci device, a vanilla virtio-mmio device and a patched
>> virtio-mmio device (each test was run 3 times):
>>
>> netperf -t TCP_RR -H 192.168.1.36 -l 30 -- -r 32,1024
>>
>>           Virtio-PCI    Virtio-MMIO    Virtio-MMIO(MSI)
>> trans/s       9536          6939            9500
>> trans/s       9734          7029            9749
>> trans/s       9894          7095            9318
>>
>> [1] https://lkml.org/lkml/2019/12/20/113
>
>
> Thanks for the patch. Two questions after a quick glance:
>
> 1) In PCI we chose to support MSI-X instead of MSI for the extra flexibility it gives, such as aliasing and an independent address/data pair per vector (e.g. for affinity). Any reason for not starting from MSI-X? E.g. having an MSI-X table and PBA (both of which look pretty independent).
Hi Jason,
Thanks for reviewing patches on Christmas Day :)
PCI MSI-X has several advantages over PCI MSI, mainly:
1) support for 2048 vectors, far more than the 32 vectors supported by MSI;
2) a dedicated address/data pair for each vector;
3) per-vector mask/pending bits.
The proposed MMIO MSI extension supports both 1) and 2), but it doesn't support 3), because we noticed
that the Linux virtio subsystem doesn't really make use of interrupt masking/unmasking.
On the other hand, we want to keep VMM implementations as simple as possible, and mimicking the PCI MSI-X
mask/pending machinery would add complexity to them.
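To illustrate, the per-vector programming we have in mind is roughly the
sketch below; the structure, register names and offsets are placeholders
for this discussion, not the layout proposed in the spec patch:

#include <linux/io.h>
#include <linux/types.h>

/* Placeholder register offsets, for illustration only. */
#define MMIO_MSI_VEC_SEL	0x0d0
#define MMIO_MSI_ADDR_LO	0x0d4
#define MMIO_MSI_ADDR_HI	0x0d8
#define MMIO_MSI_DATA		0x0dc

/* Each vector keeps its own address/data pair, MSI-X style, but there
 * are no per-vector mask/pending bits for the VMM to emulate.
 */
struct mmio_msi_vector_cfg {
	u32 addr_lo;
	u32 addr_hi;
	u32 data;	/* e.g. encodes the target CPU for affinity */
};

static void mmio_msi_set_vector(void __iomem *base, u32 vec,
				const struct mmio_msi_vector_cfg *cfg)
{
	/* Select the vector, then program its address/data. */
	writel(vec, base + MMIO_MSI_VEC_SEL);
	writel(cfg->addr_lo, base + MMIO_MSI_ADDR_LO);
	writel(cfg->addr_hi, base + MMIO_MSI_ADDR_HI);
	writel(cfg->data, base + MMIO_MSI_DATA);
}
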
> 2) It's better to split notify_multiplexer out of MSI support to ease review (applies to the spec patch as well)
Great suggestion, we will try to split the patch.
Thanks,
Gerry
>
> Thanks