Date:   Tue, 11 Feb 2020 20:18:54 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>
Cc:     virtio-dev@...ts.oasis-open.org,
        Zha Bin <zhabin@...ux.alibaba.com>, slp@...hat.com,
        "Liu, Jing2" <jing2.liu@...ux.intel.com>,
        linux-kernel@...r.kernel.org, qemu-devel@...gnu.org,
        chao.p.peng@...ux.intel.com, gerry@...ux.alibaba.com
Subject: Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt
 feature support


On 2020/2/11 8:08 PM, Michael S. Tsirkin wrote:
> On Tue, Feb 11, 2020 at 08:04:24PM +0800, Jason Wang wrote:
>> On 2020/2/11 7:58 PM, Michael S. Tsirkin wrote:
>>> On Tue, Feb 11, 2020 at 03:40:23PM +0800, Jason Wang wrote:
>>>> On 2020/2/11 2:02 PM, Liu, Jing2 wrote:
>>>>> On 2/11/2020 12:02 PM, Jason Wang wrote:
>>>>>> On 2020/2/11 11:35 AM, Liu, Jing2 wrote:
>>>>>>> On 2/11/2020 11:17 AM, Jason Wang wrote:
>>>>>>>> On 2020/2/10 5:05 PM, Zha Bin wrote:
>>>>>>>>> From: Liu Jiang <gerry@...ux.alibaba.com>
>>>>>>>>>
>>>>>>>>> Userspace VMMs (e.g. QEMU microvm, Firecracker) take advantage of
>>>>>>>>> virtio over MMIO devices as a lightweight machine model for the
>>>>>>>>> modern cloud. The standard virtio over MMIO transport layer only
>>>>>>>>> supports one legacy interrupt, which is much heavier than the
>>>>>>>>> virtio over PCI transport layer using MSI. The legacy interrupt
>>>>>>>>> has a long work path and causes specific VM exits in the following
>>>>>>>>> cases (sketched just below the list), which considerably slow down
>>>>>>>>> performance:
>>>>>>>>>
>>>>>>>>> 1) read interrupt status register
>>>>>>>>> 2) update interrupt status register
>>>>>>>>> 3) write IOAPIC EOI register
>>>>>>>>>
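A minimal sketch of that legacy path: the STATUS/ACK offsets below are the
real virtio-mmio ones, but the handler itself is illustrative, not the
actual driver code.

#include <linux/io.h>
#include <linux/interrupt.h>

#define VIRTIO_MMIO_INTERRUPT_STATUS 0x060
#define VIRTIO_MMIO_INTERRUPT_ACK    0x064

static irqreturn_t vm_legacy_interrupt(int irq, void *opaque)
{
        void __iomem *base = opaque;

        /* 1) read interrupt status register -- traps to the VMM */
        u32 status = readl(base + VIRTIO_MMIO_INTERRUPT_STATUS);

        /* 2) update (ack) interrupt status register -- traps again */
        writel(status, base + VIRTIO_MMIO_INTERRUPT_ACK);

        /* 3) the IOAPIC EOI write is done later by the generic irq code */
        return status ? IRQ_HANDLED : IRQ_NONE;
}
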
>>>>>>>>> We propose adding MSI support for virtio over MMIO via the new
>>>>>>>>> feature bit VIRTIO_F_MMIO_MSI[1], which improves interrupt
>>>>>>>>> performance.
>>>>>>>>>
>>>>>>>>> With the VIRTIO_F_MMIO_MSI feature bit supported, virtio-mmio MSI
>>>>>>>>> uses msi_sharing[1] to indicate the event and vector mapping mode.
>>>>>>>>> Bit 1 is 0: the device uses non-sharing, fixed vector-per-event
>>>>>>>>> mapping.
>>>>>>>>> Bit 1 is 1: the device uses sharing mode and dynamic mapping.
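As a rough illustration of how a driver might test this bit; the register
name and offset below are assumptions for illustration, not taken from the
spec patch.

#include <linux/io.h>
#include <linux/types.h>

#define VIRTIO_MMIO_MSI_STATE   0x0c4           /* hypothetical offset */
#define VIRTIO_MMIO_MSI_SHARING (1u << 1)       /* the msi_sharing bit */

static bool vm_msi_sharing(void __iomem *base)
{
        /* 0: fixed vector per event; 1: shared vectors, dynamic mapping */
        return readl(base + VIRTIO_MMIO_MSI_STATE) & VIRTIO_MMIO_MSI_SHARING;
}
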
>>>>>>>> I believe dynamic mapping should cover the case of fixed vector?
>>>>>>>>
>>>>>>> Actually this bit *aims* at MSI sharing vs. MSI non-sharing.
>>>>>>>
>>>>>>> It means that when the MSI sharing bit is 1, the device doesn't want
>>>>>>> a vector per queue
>>>>>>>
>>>>>>> (it wants MSI vector sharing, as the name says) and doesn't want a
>>>>>>> high interrupt rate.
>>>>>>>
>>>>>>> So the driver turns to !per_vq_vectors and has to do dynamic mapping.
>>>>>>>
>>>>>>> So they are opposites, not a superset.
>>>>>>>
>>>>>>> Thanks!
>>>>>>>
>>>>>>> Jing
>>>>>> I think you need to add more comments on the command.
>>>>>>
>>>>>> E.g. if I want to map vector 0 to queue 1, what do I need to do?
>>>>>>
>>>>>> write(1, queue_sel);   /* select queue 1 */
>>>>>> write(0, vector_sel);  /* select vector 0 */
>>>>> That's true. Besides, two commands are used for MSI sharing mode:
>>>>>
>>>>> VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE.
>>>>>
>>>>> "To set up the event and vector mapping for MSI sharing mode, driver
>>>>> SHOULD write a valid MsiVecSel followed by
>>>>> VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE command to
>>>>> map the configuration change/selected queue events respectively.  " (See
>>>>> spec patch 5/5)
>>>>>
>>>>> So if the driver detects MSI sharing mode, then when it sets up a vq
>>>>> it writes queue_sel (this already exists in setup vq), then the vector
>>>>> sel, and then the MAP_QUEUE command to do the queue event mapping.
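A sketch of that sequence, using the MsiVecSel / VIRTIO_MMIO_MSI_CMD_MAP_QUEUE
names from the spec patch; the MSI register offsets and command value here
are assumptions, only QUEUE_SEL is the existing register.

#include <linux/io.h>
#include <linux/types.h>

#define VIRTIO_MMIO_QUEUE_SEL         0x030  /* existing register */
#define VIRTIO_MMIO_MSI_VEC_SEL       0x0d0  /* hypothetical offset */
#define VIRTIO_MMIO_MSI_COMMAND       0x0c0  /* hypothetical offset */
#define VIRTIO_MMIO_MSI_CMD_MAP_QUEUE 2      /* hypothetical encoding */

static void vm_map_queue_vector(void __iomem *base, u32 queue, u32 vector)
{
        writel(queue,  base + VIRTIO_MMIO_QUEUE_SEL);   /* pick the queue */
        writel(vector, base + VIRTIO_MMIO_MSI_VEC_SEL); /* pick the vector */
        /* the command makes the device latch the (queue, vector) pair */
        writel(VIRTIO_MMIO_MSI_CMD_MAP_QUEUE, base + VIRTIO_MMIO_MSI_COMMAND);
}
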
>>>>>
>>>> So per-vq MSI-X could actually be done through this. I don't get why
>>>> you need to introduce MSI_SHARING_MASK, which is the business of the
>>>> driver instead of the device. The interrupt rate should have no direct
>>>> relationship with whether the vector is shared or not.
>>>>
>>>> Btw, you introduce mask/unmask without a pending bit; how do you deal
>>>> with interrupts lost while masked?
>>> Pending can be an internal device register. As long as the device
>>> does not lose interrupts while masked, all's well.
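Device-side, that amounts to something like the sketch below; all names are
illustrative, and inject_msi() stands in for the VMM's MSI injection hook.

#include <stdbool.h>
#include <stdint.h>

void inject_msi(uint64_t addr, uint32_t data);  /* VMM injection hook */

struct msi_vector {
        bool     masked;
        bool     pending;   /* internal only, never guest visible */
        uint64_t addr;
        uint32_t data;
};

static void vector_trigger(struct msi_vector *v)
{
        if (v->masked)
                v->pending = true;              /* latch, don't lose it */
        else
                inject_msi(v->addr, v->data);
}

static void vector_unmask(struct msi_vector *v)
{
        v->masked = false;
        if (v->pending) {                       /* replay on unmask */
                v->pending = false;
                inject_msi(v->addr, v->data);
        }
}
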
>>
>> You mean the device raises the interrupt automatically on unmask?
>>
>
> Yes - that's also what PCI does.
>
> The guest-visible pending bit is partially implemented in QEMU
> as per the PCI spec, but it's unused.


Ok.


>
>>> There's value in being able to say "this queue sends no
>>> interrupts; do not bother checking the used notification area",
>>> so we need a way to say that. So I guess an enable-interrupts
>>> register might have some value...
>>> But besides that, it's enough to have mask/unmask/address/data
>>> per vq.
>>
>> Just to check, do you mean "per vector" here?
>>
>> Thanks
>>
> No, per VQ. An indirection VQ -> vector -> address/data isn't
> necessary for PCI either, but that ship has sailed.
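To contrast the two layouts: PCI keeps a vector table and each VQ stores an
index into it, while the suggestion here stores the message directly per VQ.
Field names below are illustrative.

#include <stdint.h>

/* PCI-style indirection: VQ -> vector number -> MSI-X table entry */
struct vq_pci          { uint16_t msix_vector; };   /* index into table */
struct msix_table_entry { uint64_t addr; uint32_t data; uint32_t mask; };

/* Direct per-VQ registers, no vector table in between */
struct vq_mmio {
        uint64_t msi_addr;  /* where the device writes */
        uint32_t msi_data;  /* what the device writes */
        uint32_t msi_mask;  /* mask/unmask this queue's interrupt */
};
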


Yes, it can work, but it may bring extra effort when you want to mask a
vector which is shared by a lot of queues.
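For instance, with only per-queue mask registers, masking one shared vector
means a walk over all queues mapped to it. A sketch; the per-queue mask
register and its offset are assumptions, only QUEUE_SEL is existing.

#include <linux/io.h>
#include <linux/types.h>

#define VIRTIO_MMIO_QUEUE_SEL      0x030   /* existing register */
#define VIRTIO_MMIO_QUEUE_MSI_MASK 0x0d8   /* hypothetical offset */

static void mask_shared_vector(void __iomem *base, u32 nvqs,
                               const u32 *vq_to_vector, u32 vector)
{
        u32 q;

        for (q = 0; q < nvqs; q++) {
                if (vq_to_vector[q] != vector)
                        continue;
                writel(q, base + VIRTIO_MMIO_QUEUE_SEL);
                writel(1, base + VIRTIO_MMIO_QUEUE_MSI_MASK);
        }
}
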

Thanks

>
