Message-ID: <20200211065319-mutt-send-email-mst@kernel.org>
Date: Tue, 11 Feb 2020 06:58:33 -0500
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Jason Wang <jasowang@...hat.com>
Cc: "Liu, Jing2" <jing2.liu@...ux.intel.com>,
Zha Bin <zhabin@...ux.alibaba.com>,
linux-kernel@...r.kernel.org, virtio-dev@...ts.oasis-open.org,
slp@...hat.com, qemu-devel@...gnu.org, chao.p.peng@...ux.intel.com,
gerry@...ux.alibaba.com
Subject: Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt
feature support
On Tue, Feb 11, 2020 at 03:40:23PM +0800, Jason Wang wrote:
>
> On 2020/2/11 2:02 PM, Liu, Jing2 wrote:
> >
> >
> > On 2/11/2020 12:02 PM, Jason Wang wrote:
> > >
> > > > On 2020/2/11 11:35 AM, Liu, Jing2 wrote:
> > > >
> > > > On 2/11/2020 11:17 AM, Jason Wang wrote:
> > > > >
> > > > > > On 2020/2/10 5:05 PM, Zha Bin wrote:
> > > > > > From: Liu Jiang<gerry@...ux.alibaba.com>
> > > > > >
> > > > > > Userspace VMMs (e.g. QEMU microvm, Firecracker) take advantage of
> > > > > > virtio over MMIO devices as a lightweight machine model for modern
> > > > > > cloud workloads. The standard virtio over MMIO transport layer only
> > > > > > supports a single legacy interrupt, which is much heavier than the
> > > > > > virtio over PCI transport layer using MSI. A legacy interrupt has a
> > > > > > long delivery path and causes extra VM exits in the following cases,
> > > > > > which considerably slows down performance (as sketched below):
> > > > > >
> > > > > > 1) read interrupt status register
> > > > > > 2) update interrupt status register
> > > > > > 3) write IOAPIC EOI register
> > > > > >
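> > > > > > Roughly, the guest-side half of that path looks like the sketch
> > > > > > below (register offsets as in the existing virtio-mmio layout; the
> > > > > > IOAPIC EOI write is issued later by the irqchip code). Each MMIO
> > > > > > access traps to the VMM:
> > > > > >
> > > > > > #include <stdint.h>
> > > > > >
> > > > > > #define VIRTIO_MMIO_INTERRUPT_STATUS 0x060
> > > > > > #define VIRTIO_MMIO_INTERRUPT_ACK    0x064
> > > > > >
> > > > > > static uint32_t rd32(volatile uint8_t *base, uint32_t off)
> > > > > > {
> > > > > >         return *(volatile uint32_t *)(base + off);
> > > > > > }
> > > > > >
> > > > > > static void wr32(volatile uint8_t *base, uint32_t off, uint32_t val)
> > > > > > {
> > > > > >         *(volatile uint32_t *)(base + off) = val;
> > > > > > }
> > > > > >
> > > > > > static void legacy_virtio_mmio_isr(volatile uint8_t *base)
> > > > > > {
> > > > > >         /* 1) read the interrupt status register (one VM exit) */
> > > > > >         uint32_t status = rd32(base, VIRTIO_MMIO_INTERRUPT_STATUS);
> > > > > >
> > > > > >         /* 2) acknowledge it by updating the status (another VM exit) */
> > > > > >         wr32(base, VIRTIO_MMIO_INTERRUPT_ACK, status);
> > > > > >
> > > > > >         /* 3) the IOAPIC EOI write follows in the irqchip code */
> > > > > > }
> > > > > >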
> > > > > > We proposed to add MSI support for virtio over MMIO via a new
> > > > > > feature bit, VIRTIO_F_MMIO_MSI[1], which improves interrupt
> > > > > > performance.
> > > > > >
> > > > > > With the VIRTIO_F_MMIO_MSI feature bit supported, virtio-mmio MSI
> > > > > > uses msi_sharing[1] to indicate the event and vector mapping.
> > > > > > Bit 1 is 0: the device uses non-sharing mode with a fixed vector
> > > > > > per event.
> > > > > > Bit 1 is 1: the device uses sharing mode with dynamic mapping.
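> > > > > >
> > > > > > For illustration only (the register name and offset below are
> > > > > > placeholders, not the actual layout from patch 5/5), the driver-side
> > > > > > check could look like:
> > > > > >
> > > > > > #include <stdbool.h>
> > > > > > #include <stdint.h>
> > > > > >
> > > > > > /* hypothetical register holding the msi_sharing bit */
> > > > > > #define VIRTIO_MMIO_MSI_STATE 0x0c0
> > > > > > #define MSI_SHARING_MASK      (1u << 1)
> > > > > >
> > > > > > static bool msi_sharing_enabled(volatile uint8_t *base)
> > > > > > {
> > > > > >         uint32_t state =
> > > > > >                 *(volatile uint32_t *)(base + VIRTIO_MMIO_MSI_STATE);
> > > > > >
> > > > > >         /* bit 1 == 1: shared vectors, dynamic mapping;
> > > > > >          * bit 1 == 0: fixed vector per event */
> > > > > >         return state & MSI_SHARING_MASK;
> > > > > > }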
> > > > >
> > > > >
> > > > > I believe dynamic mapping should cover the fixed-vector case?
> > > > >
> > > > Actually this bit *aims* to distinguish MSI sharing from MSI
> > > > non-sharing.
> > > >
> > > > It means that when the MSI sharing bit is 1, the device doesn't want a
> > > > vector per queue (it wants MSI vector sharing, as the name says) and
> > > > doesn't expect a high interrupt rate.
> > > >
> > > > So the driver turns to !per_vq_vectors and has to do dynamic mapping.
> > > >
> > > > So they are opposites, not a superset.
> > > >
> > > > Thanks!
> > > >
> > > > Jing
> > >
> > >
> > > I think you need to add more comments on the command.
> > >
> > > E.g. if I want to map vector 0 to queue 1, what do I need to do?
> > >
> > > write(1, queue_sel);
> > > write(0, vector_sel);
> >
> > That's true. Besides, two commands are used for MSI sharing mode:
> >
> > VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE.
> >
> > "To set up the event and vector mapping for MSI sharing mode, driver
> > SHOULD write a valid MsiVecSel followed by
> > VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE command to
> > map the configuration change/selected queue events respectively. " (See
> > spec patch 5/5)
> >
> > So if the driver detects MSI sharing mode, then when it sets up a vq it
> > writes queue_sel (this write already exists in the vq setup path), then
> > the vector sel, and then the MAP_QUEUE command to map the queue event.
> >
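> > As a sketch (the MSI register offsets and command value below are
> > placeholders; the real definitions are in patch 5/5), mapping e.g.
> > queue 1 to vector 0 would be:
> >
> > #include <stdint.h>
> >
> > #define VIRTIO_MMIO_QUEUE_SEL         0x030 /* existing register */
> > /* placeholder offsets/values for the proposed MSI registers */
> > #define VIRTIO_MMIO_MSI_VEC_SEL       0x0d0
> > #define VIRTIO_MMIO_MSI_COMMAND       0x0d4
> > #define VIRTIO_MMIO_MSI_CMD_MAP_QUEUE 0x2
> >
> > static void wr32(volatile uint8_t *base, uint32_t off, uint32_t val)
> > {
> >         *(volatile uint32_t *)(base + off) = val;
> > }
> >
> > /* MSI sharing mode: map the selected queue's event to an MSI vector */
> > static void map_queue_to_vector(volatile uint8_t *base,
> >                                 uint32_t queue, uint32_t vector)
> > {
> >         wr32(base, VIRTIO_MMIO_QUEUE_SEL, queue);    /* e.g. 1 */
> >         wr32(base, VIRTIO_MMIO_MSI_VEC_SEL, vector); /* e.g. 0 */
> >         wr32(base, VIRTIO_MMIO_MSI_COMMAND, VIRTIO_MMIO_MSI_CMD_MAP_QUEUE);
> > }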
>
> So actually per-vq MSI-X could be done through this. I don't get why you
> need to introduce MSI_SHARING_MASK, which is the driver's business rather
> than the device's. The interrupt rate should have no direct relationship
> with whether the vector is shared or not.
>
> Btw, you introduce mask/unmask without a pending register; how do you deal
> with interrupts that are lost while masked, then?
Pending can be an internal device register. As long as the device
does not lose interrupts while masked, all's well.
There's value in being able to say "this queue sends no
interrupts, do not bother checking the used notification area",
so we need a way to say that. So I guess an enable-interrupts
register might have some value...
But besides that, it's enough to have mask/unmask/address/data
per vq.
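
E.g. on the device side, something along these lines (pseudo-code,
not from the patch) is enough to avoid losing interrupts:

#include <stdbool.h>
#include <stdint.h>

/* per-vector device-internal state; 'pending' never has to be
 * exposed to the driver as long as it is not lost while masked */
struct msi_vector_state {
        uint64_t addr;
        uint32_t data;
        bool     masked;
        bool     pending;   /* internal latch */
};

static void deliver_msi(struct msi_vector_state *v)
{
        /* inject v->addr / v->data into the guest; stubbed here */
        (void)v;
}

static void vector_event(struct msi_vector_state *v)
{
        if (v->masked)
                v->pending = true;   /* latch it, do not drop it */
        else
                deliver_msi(v);
}

static void vector_unmask(struct msi_vector_state *v)
{
        v->masked = false;
        if (v->pending) {
                v->pending = false;
                deliver_msi(v);      /* flush the latched interrupt */
        }
}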
>
> > For MSI non-sharing mode, no special action is needed because we make it
> > a rule that each vq has its own vector with a fixed mapping.
> >
> > Correct me if this is not clear enough in the spec/code comments.
> >
>
> The ABI is not as straightforward as PCI's. Why not just reuse the PCI
> layout?
>
> E.g. having
>
> queue_sel
> queue_msix_vector
> msix_config
>
> for configuring the mapping between MSI vectors and queues/config
>
> Then
>
> vector_sel
> address
> data
> pending
> mask
> unmask
>
> for configuring the MSI table?
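>
> I.e. roughly this register block (a sketch only; the field widths and
> the lo/hi split of the address are illustrative, not a concrete layout):
>
> #include <stdint.h>
>
> struct virtio_mmio_msi_regs {
>         /* vector <-> queue/config mapping, as in virtio-pci */
>         uint32_t queue_sel;         /* select a queue ... */
>         uint32_t queue_msix_vector; /* ... then read/write its vector */
>         uint32_t msix_config;       /* vector for config change events */
>
>         /* per-vector MSI table, indexed via vector_sel */
>         uint32_t vector_sel;
>         uint32_t address_lo;
>         uint32_t address_hi;
>         uint32_t data;
>         uint32_t pending;
>         uint32_t mask;
>         uint32_t unmask;
> };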
>
> Thanks
>
>
> > Thanks!
> >
> > Jing
> >
> >
> > >
> > > ?
> > >
> > > Thanks
> > >
> > >
> > > >
> > > >
> > > > > Thanks
> > > > >
> > > > >
> > > > >
> > > >
> > >