Message-ID: <20200729141554.GA47212@mtl-vdi-166.wap.labs.mlnx>
Date: Wed, 29 Jul 2020 17:15:54 +0300
From: Eli Cohen <eli@...lanox.com>
To: Jason Wang <jasowang@...hat.com>
Cc: Zhu Lingshan <lingshan.zhu@...el.com>, alex.williamson@...hat.com,
mst@...hat.com, pbonzini@...hat.com,
sean.j.christopherson@...el.com, wanpengli@...cent.com,
virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
kvm@...r.kernel.org, shahafs@...lanox.com, parav@...lanox.com
Subject: Re: [PATCH V4 4/6] vhost_vdpa: implement IRQ offloading in vhost_vdpa
OK, we have a mode of operation that does not require driver
intervention to manipulate the event queues, so I think we're OK with
this design.
On Wed, Jul 29, 2020 at 06:19:52PM +0800, Jason Wang wrote:
>
> >On 2020/7/29 5:55 PM, Eli Cohen wrote:
> >On Wed, Jul 29, 2020 at 05:21:53PM +0800, Jason Wang wrote:
> >>On 2020/7/28 5:04 PM, Eli Cohen wrote:
> >>>On Tue, Jul 28, 2020 at 12:24:03PM +0800, Zhu Lingshan wrote:
> >>>>+static void vhost_vdpa_setup_vq_irq(struct vhost_vdpa *v, int qid)
> >>>>+{
> >>>>+	struct vhost_virtqueue *vq = &v->vqs[qid];
> >>>>+	const struct vdpa_config_ops *ops = v->vdpa->config;
> >>>>+	struct vdpa_device *vdpa = v->vdpa;
> >>>>+	int ret, irq;
> >>>>+
> >>>>+	spin_lock(&vq->call_ctx.ctx_lock);
> >>>>+	irq = ops->get_vq_irq(vdpa, qid);
> >>>>+	if (!vq->call_ctx.ctx || irq == -EINVAL) {
> >>>>+		spin_unlock(&vq->call_ctx.ctx_lock);
> >>>>+		return;
> >>>>+	}
> >>>>+
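For readers without the rest of the patch: the function presumably goes
on to register the irq with the irq bypass manager so KVM can post it
to the vCPU directly. A minimal sketch of that continuation, assuming
the producer fields follow the call_ctx usage in the quoted hunk:

	vq->call_ctx.producer.token = vq->call_ctx.ctx;
	vq->call_ctx.producer.irq = irq;
	/* Hand the (eventfd, irq) pair to the irq bypass manager; KVM
	 * consumes it and delivers the interrupt straight to the vCPU.
	 */
	ret = irq_bypass_register_producer(&vq->call_ctx.producer);
	spin_unlock(&vq->call_ctx.ctx_lock);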
> >>>If I understand correctly, this will cause these IRQs to be forwarded
> >>>directly to the vCPU, i.e. they will be handled by the guest/qemu.
> >>
> >>Yes, if it can be bypassed, the interrupt will be delivered to the
> >>vCPU directly.
> >>
> >So, usually the network driver knows how to handle interrupts for its
> >devices. I assume the virtio_net driver in the guest has some default
> >processing, but what if the underlying hardware device (as in the vdpa
> >case) needs to take some action?
>
>
> Virtio splits the bus operations out of the device operations, and the
> drivers are split the same way.
>
> The virtio-net driver depends on a transport driver to talk to the
> real device; usually PCI is used as the transport. In that case the
> virtio-pci driver is in charge of irq allocation/freeing/configuration,
> and it needs to cooperate with the platform-specific irqchip
> (virtualized by KVM) to finish work like irq acknowledgement. E.g. on
> x86, irq offloading can only work when there is hardware support for a
> virtualized irqchip (APICv); then everything can be done without
> vmexits.
>
> So there is no vendor-specific part, since the device and transport
> are both standard.
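To make the transport's role concrete, here is a hedged sketch of how a
transport driver wires a per-vq MSI-X vector to the generic virtio
callback path; the function name and irq name string are illustrative,
not the exact virtio-pci code:

	#include <linux/interrupt.h>
	#include <linux/pci.h>
	#include <linux/virtio.h>
	#include <linux/virtio_ring.h>

	/* The transport allocates the vector and points it at the generic
	 * vring_interrupt() dispatcher; the device driver (e.g. virtio-net)
	 * never touches the irq itself.
	 */
	static int setup_vq_irq(struct pci_dev *pdev, struct virtqueue *vq,
				int msix_vec)
	{
		return request_irq(pci_irq_vector(pdev, msix_vec),
				   vring_interrupt, 0, "virtio-vq", vq);
	}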
>
>
> > Is there an option to bounce the
> >interrupt back to the vendor-specific driver in the host so it can take
> >these actions?
>
>
> Currently not, but even if we could do this, I'm afraid we would lose
> the performance advantage of irq bypassing.
>
>
> >
> >>>Does this mean that the host will not handle this interrupt? How does it
> >>>work in the case of level-triggered interrupts?
> >>
> >>There's no guarantee that the KVM arch code can make irq bypass work
> >>for every type of irq, so in those cases the irq will still need to
> >>be handled by the host first. This means we should keep the host
> >>interrupt handler as a slow path (fallback).
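In vhost_vdpa terms, that slow path would look roughly like the sketch
below (the handler name is hypothetical): the host takes the interrupt
and relays it through the vq's call eventfd, which KVM then injects
into the guest.

	#include <linux/eventfd.h>
	#include <linux/interrupt.h>
	#include "vhost.h"	/* struct vhost_virtqueue */

	/* Hypothetical fallback handler, used whenever the irq cannot be
	 * bypassed straight to the vCPU.
	 */
	static irqreturn_t vq_irq_fallback(int irq, void *data)
	{
		struct vhost_virtqueue *vq = data;

		if (vq->call_ctx.ctx)
			eventfd_signal(vq->call_ctx.ctx, 1);

		return IRQ_HANDLED;
	}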
> >>
> >>>In the case of ConnectX, I need to execute some code to acknowledge the
> >>>interrupt.
> >>
> >>That makes it hard for irq bypassing to work. Is it because the irq
> >>is shared, or what kind of ack do you need to do?
> >I have an EQ, which is a queue for events coming from the hardware. This
> >EQ can be created so it reports only completion events, but I still need
> >to execute code that tells the device I saw these event records and then
> >arms it again so it can generate more interrupts (e.g. if more packets
> >are received or sent). This is device-specific code.
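Roughly, the device-specific step described above would look like the
following sketch (all names here are hypothetical; the real ConnectX
code lives in mlx5_core):

	#include <linux/interrupt.h>
	#include <linux/io.h>

	/* Hypothetical EQ state; illustration only. */
	struct my_eq {
		void __iomem *arm_db;	/* device doorbell register */
		u32 cons_index;		/* how far we have consumed the EQ */
	};

	static bool poll_eqe(struct my_eq *eq);	/* hypothetical: next valid EQE? */

	static irqreturn_t eq_irq_handler(int irq, void *data)
	{
		struct my_eq *eq = data;

		/* Consume the event records the hardware has queued. */
		while (poll_eqe(eq))
			eq->cons_index++;

		/* Ack + re-arm: tell the device which events were seen so it
		 * may raise further interrupts.  This is the step that cannot
		 * run if the irq is delivered straight to the guest.
		 */
		writel(eq->cons_index, eq->arm_db);

		return IRQ_HANDLED;
	}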
>
>
> Any chance that the hardware can use MSI (which is not the case here)?
>
> Thanks
>
>
> >>Thanks
> >>
> >>
> >>>Can you explain how this should be done?
> >>>
>