Message-ID: <44209f3c-613c-3766-ca83-321b77b0f0dd@redhat.com>
Date:   Wed, 12 Feb 2020 17:06:52 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     "Liu, Jing2" <jing2.liu@...ux.intel.com>,
        Zha Bin <zhabin@...ux.alibaba.com>,
        linux-kernel@...r.kernel.org
Cc:     virtio-dev@...ts.oasis-open.org, slp@...hat.com, mst@...hat.com,
        qemu-devel@...gnu.org, chao.p.peng@...ux.intel.com,
        gerry@...ux.alibaba.com
Subject: Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt
 feature support


On 2020/2/12 11:54 AM, Liu, Jing2 wrote:
>
>
> On 2/11/2020 3:40 PM, Jason Wang wrote:
>>
>> On 2020/2/11 2:02 PM, Liu, Jing2 wrote:
>>>
>>>
>>> On 2/11/2020 12:02 PM, Jason Wang wrote:
>>>>
>>>> On 2020/2/11 11:35 AM, Liu, Jing2 wrote:
>>>>>
>>>>> On 2/11/2020 11:17 AM, Jason Wang wrote:
>>>>>>
>>>>>> On 2020/2/10 5:05 PM, Zha Bin wrote:
>>>>>>> From: Liu Jiang<gerry@...ux.alibaba.com>
>>>>>>>
>>>>>>> Userspace VMMs (e.g. Qemu microvm, Firecracker) take advantage of
>>>>>>> virtio over MMIO devices as a lightweight machine model for the modern
>>>>>>> cloud. The standard virtio over MMIO transport layer only supports one
>>>>>>> legacy interrupt, which is much heavier than the virtio over PCI
>>>>>>> transport layer using MSI. The legacy interrupt has a long work path
>>>>>>> and causes specific VMExits in the following cases, which considerably
>>>>>>> slows down performance:
>>>>>>>
>>>>>>> 1) read interrupt status register
>>>>>>> 2) update interrupt status register
>>>>>>> 3) write IOAPIC EOI register
>>>>>>>
>>>>>>> We propose adding MSI support for virtio over MMIO via the new feature
>>>>>>> bit VIRTIO_F_MMIO_MSI[1], which improves interrupt performance.
>>>>>>>
>>>>>>> With the VIRTIO_F_MMIO_MSI feature bit supported, virtio-mmio MSI
>>>>>>> uses msi_sharing[1] to indicate the event and vector mapping.
>>>>>>> Bit 1 is 0: the device uses non-sharing, fixed vector-per-event mapping.
>>>>>>> Bit 1 is 1: the device uses sharing mode and dynamic mapping.
>>>>>>
>>>>>>
>>>>>> I believe dynamic mapping should cover the case of fixed vector?
>>>>>>
>>>>> Actually this bit *aims* to distinguish MSI sharing from MSI non-sharing.
>>>>>
>>>>> It means that when the MSI sharing bit is 1, the device doesn't want a
>>>>> vector per queue (it wants MSI vector sharing, as the name says) and
>>>>> doesn't expect a high interrupt rate.
>>>>>
>>>>> So the driver falls back to !per_vq_vectors and has to do dynamic mapping.
>>>>>
>>>>> So the two modes are opposites, not a superset of each other.
>>>>>
>>>>> Thanks!
>>>>>
>>>>> Jing
>>>>
>>>>
>>>> I think you need to add more comments on the command.
>>>>
>>>> E.g. if I want to map vector 0 to queue 1, what do I need to do?
>>>>
>>>> write(1, queue_sel);
>>>> write(0, vector_sel);
>>>
>>> That's true. Besides, two commands are used for MSI sharing mode:
>>>
>>> VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE.
>>>
>>> "To set up the event and vector mapping for MSI sharing mode, driver 
>>> SHOULD write a valid MsiVecSel followed by 
>>> VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE command 
>>> to map the configuration change/selected queue events respectively.  
>>> " (See spec patch 5/5)
>>>
>>> So if the driver detects MSI sharing mode, then when it sets up a vq it
>>> writes queue_sel (this already exists in vq setup), then vector_sel,
>>> and then the MAP_QUEUE command to do the queue event mapping.
>>>
>>
>> So actually the per-vq MSI-X mapping could be done through this.
>
> Right, per-vq MSI-X can also be mapped by the 2 commands if we want.
>
> The current design benefits devices requesting per-vq MSI in that the
> driver doesn't have to do the mapping while setting up each queue,
> since we define the relationship by default.
>

Well, since you've defined the dynamic mapping, having some "default" 
mapping won't reduce the complexity; it will increase it.
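
For concreteness, the sharing-mode sequence quoted above from spec patch 5/5 
might look roughly like the sketch below. Only the command name 
VIRTIO_MMIO_MSI_CMD_MAP_QUEUE and the MsiVecSel offset (0x0d0, mentioned 
later in this thread) come from the series; the command-register offset and 
the command encoding are placeholders for illustration only.

#include <linux/io.h>
#include <linux/types.h>

#define VIRTIO_MMIO_QUEUE_SEL          0x030  /* existing virtio-mmio QueueSel */
#define VIRTIO_MMIO_MSI_VEC_SEL        0x0d0  /* MsiVecSel, per this thread */
#define VIRTIO_MMIO_MSI_COMMAND        0x0c8  /* placeholder offset */
#define VIRTIO_MMIO_MSI_CMD_MAP_QUEUE  0x2    /* placeholder encoding */

/* Map MSI vector 'vector' to virtqueue 'queue' in MSI sharing mode. */
static void vm_msi_map_queue(void __iomem *base, u32 queue, u32 vector)
{
        writel(queue, base + VIRTIO_MMIO_QUEUE_SEL);
        writel(vector, base + VIRTIO_MMIO_MSI_VEC_SEL);
        writel(VIRTIO_MMIO_MSI_CMD_MAP_QUEUE, base + VIRTIO_MMIO_MSI_COMMAND);
}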


>
>> I don't get why you need to introduce MSI_SHARING_MASK, which is the 
>> driver's concern rather than the device's.
>
> MSI_SHARING_MASK is just for identifying the msi_sharing bit in 
> readl(MsiState) (0x0c4). The device tells the driver whether it wants 
> msi_sharing.
>
> MsiState register: R
>
> le32 {
>     msi_enabled : 1;
>     msi_sharing: 1;
>     reserved : 30;
> };
>

The question is why the device wants such information.
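
To spell out the driver-side check being discussed, here is a minimal 
sketch assuming the 0x0c4 offset and bit layout quoted above; the register 
macro and helper names are illustrative, not taken from the patch.

#include <linux/io.h>
#include <linux/types.h>

#define VIRTIO_MMIO_MSI_STATE  0x0c4        /* MsiState, per the layout above */
#define MSI_ENABLED_MASK       (1u << 0)    /* msi_enabled */
#define MSI_SHARING_MASK       (1u << 1)    /* msi_sharing */

/* Return true if the device asks for MSI vector sharing (dynamic mapping). */
static bool vm_msi_wants_sharing(void __iomem *base)
{
        u32 state = readl(base + VIRTIO_MMIO_MSI_STATE);

        return (state & MSI_ENABLED_MASK) && (state & MSI_SHARING_MASK);
}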


>
>> The interrupt rate should have no direct relationship with whether the 
>> vector is shared or not.
>
>>
>> Btw, you introduce mask/unmask without pending; how do you then deal with 
>> an interrupt that is lost while the vector is masked?
>>
>>
>>> For MSI non-sharing mode, no special action is needed because the rule 
>>> is per_vq_vectors with a fixed queue-to-vector relationship.
>>>
>>> Correct me if this is not clear enough in the spec/code comments.
>>>
>>
>> The ABI is not as straightforward as PCI's. Why not just reuse the 
>> PCI layout?
>>
>> E.g. having
>>
>> queue_sel
>> queue_msix_vector
>> msix_config
>>
>> for configuring the mapping between MSI vectors and queues/config
>
> Thanks for the advice. :)
>
> Actually, looking into PCI, queue_msix_vector/msix_config holds the MSI 
> vector index, which is the same as the MMIO register MsiVecSel (0x0d0).
>
> So we don't introduce two extra registers for the mapping, even in sharing 
> mode.
>
> What do you think?
>

I'm not sure I get the point, but I still prefer keeping vector_sel 
separate from queue_msix_vector.
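
For comparison, the virtio-pci style queue-to-vector mapping works roughly 
as in the sketch below. The register names follow the 
queue_sel/queue_msix_vector proposal above; the offsets are placeholders 
for the sketch. The read-back check is the virtio-pci convention for 
reporting that a vector could not be assigned.

#include <linux/errno.h>
#include <linux/io.h>
#include <linux/types.h>

#define QUEUE_SEL             0x16    /* placeholder offsets for the sketch */
#define QUEUE_MSIX_VECTOR     0x1a
#define VIRTIO_MSI_NO_VECTOR  0xffff

/* Map 'vector' to virtqueue 'queue', PCI style: select, write, read back. */
static int map_queue_pci_style(void __iomem *cfg, u16 queue, u16 vector)
{
        writew(queue, cfg + QUEUE_SEL);
        writew(vector, cfg + QUEUE_MSIX_VECTOR);

        /* The device answers VIRTIO_MSI_NO_VECTOR if it cannot back the
         * mapping, so the driver must read the register back to check. */
        if (readw(cfg + QUEUE_MSIX_VECTOR) == VIRTIO_MSI_NO_VECTOR)
                return -EBUSY;

        return 0;
}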

Btw, Michael proposed per-vq registers, which could also work.

Thanks


>
>>
>> Then
>>
>> vector_sel
>> address
>> data
>> pending
>> mask
>> unmask
>>
>> for configuring the MSI table?
>
> A PCI-like MSI-X table is not introduced to the device; instead we simply 
> use commands to tell it about mask/configure/enable.
>
> Thanks!
>
> Jing
>
>>
>> Thanks
>>
>>
>>> Thanks!
>>>
>>> Jing
>>>
>>>
>>>>
>>>> ?
>>>>
>>>> Thanks
>>>>
>>>>
>>>>>
>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>
