Message-ID: <16eb4c0e-50b1-5c9a-1d01-ea6cd7d09398@redhat.com>
Date: Mon, 8 Mar 2021 15:29:53 +0800
From: Jason Wang <jasowang@...hat.com>
To: Yongji Xie <xieyongji@...edance.com>
Cc: "Michael S. Tsirkin" <mst@...hat.com>,
Stefan Hajnoczi <stefanha@...hat.com>,
Stefano Garzarella <sgarzare@...hat.com>,
Parav Pandit <parav@...dia.com>, Bob Liu <bob.liu@...cle.com>,
Christoph Hellwig <hch@...radead.org>,
Randy Dunlap <rdunlap@...radead.org>,
Matthew Wilcox <willy@...radead.org>, viro@...iv.linux.org.uk,
Jens Axboe <axboe@...nel.dk>, bcrl@...ck.org,
Jonathan Corbet <corbet@....net>,
virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
kvm@...r.kernel.org, linux-aio@...ck.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [RFC v4 10/11] vduse: Introduce a workqueue for irq injection
On 2021/3/8 3:16 PM, Yongji Xie wrote:
> On Mon, Mar 8, 2021 at 3:02 PM Jason Wang <jasowang@...hat.com> wrote:
>>
>> On 2021/3/8 12:50 PM, Yongji Xie wrote:
>>> On Mon, Mar 8, 2021 at 11:04 AM Jason Wang <jasowang@...hat.com> wrote:
>>>> On 2021/3/5 4:12 PM, Yongji Xie wrote:
>>>>> On Fri, Mar 5, 2021 at 3:37 PM Jason Wang <jasowang@...hat.com> wrote:
>>>>>> On 2021/3/5 3:27 PM, Yongji Xie wrote:
>>>>>>> On Fri, Mar 5, 2021 at 3:01 PM Jason Wang <jasowang@...hat.com> wrote:
>>>>>>>> On 2021/3/5 2:36 PM, Yongji Xie wrote:
>>>>>>>>> On Fri, Mar 5, 2021 at 11:42 AM Jason Wang <jasowang@...hat.com> wrote:
>>>>>>>>>> On 2021/3/5 11:30 AM, Yongji Xie wrote:
>>>>>>>>>>> On Fri, Mar 5, 2021 at 11:05 AM Jason Wang <jasowang@...hat.com> wrote:
>>>>>>>>>>>> On 2021/3/4 4:58 PM, Yongji Xie wrote:
>>>>>>>>>>>>> On Thu, Mar 4, 2021 at 2:59 PM Jason Wang <jasowang@...hat.com> wrote:
>>>>>>>>>>>>>> On 2021/2/23 7:50 PM, Xie Yongji wrote:
>>>>>>>>>>>>>>> This patch introduces a workqueue to support injecting a
>>>>>>>>>>>>>>> virtqueue's interrupt asynchronously. This is mainly a
>>>>>>>>>>>>>>> performance consideration: it makes sure that the push()
>>>>>>>>>>>>>>> and pop() on the used vring can run asynchronously.
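>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The idea, in rough pseudo-code (simplified, the identifiers
>>>>>>>>>>>>>>> below are only illustrative):
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> /* Defer the interrupt, so the ioctl path (the used ring
>>>>>>>>>>>>>>>  * producer) can return without running the driver's
>>>>>>>>>>>>>>>  * callback synchronously. */
>>>>>>>>>>>>>>> static void vduse_vq_irq_inject(struct work_struct *work)
>>>>>>>>>>>>>>> {
>>>>>>>>>>>>>>>         struct vduse_virtqueue *vq =
>>>>>>>>>>>>>>>                 container_of(work, struct vduse_virtqueue, inject);
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>         spin_lock_irq(&vq->irq_lock);
>>>>>>>>>>>>>>>         if (vq->ready && vq->cb.callback)
>>>>>>>>>>>>>>>                 vq->cb.callback(vq->cb.private);
>>>>>>>>>>>>>>>         spin_unlock_irq(&vq->irq_lock);
>>>>>>>>>>>>>>> }
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> /* In the irq injection ioctl handler, with the workqueue
>>>>>>>>>>>>>>>  * created as alloc_workqueue("vduse-irq", WQ_UNBOUND, 0): */
>>>>>>>>>>>>>>> queue_work(vduse_irq_wq, &vq->inject);
>>>>>>>>>>>>>>>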
>>>>>>>>>>>>>> Do you have perf numbers for this patch?
>>>>>>>>>>>>>>
>>>>>>>>>>>>> No, I can do some tests for it if needed.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Another problem is that the VIRTIO_RING_F_EVENT_IDX feature will be
>>>>>>>>>>>>> useless if we call the irq callback in the ioctl context. Something like:
>>>>>>>>>>>>>
>>>>>>>>>>>>> virtqueue_push();      (userspace)
>>>>>>>>>>>>> virtio_notify();
>>>>>>>>>>>>> ioctl()
>>>>>>>>>>>>> ------------------------------------------------- user/kernel boundary
>>>>>>>>>>>>> irq_cb()               (kernel, in ioctl context)
>>>>>>>>>>>>> virtqueue_get_buf()
>>>>>>>>>>>>>
>>>>>>>>>>>>> The used vring is always empty each time we call virtqueue_push() in
>>>>>>>>>>>>> userspace. Not sure if that is what we expect.
>>>>>>>>>>>> I'm not sure I get the issue.
>>>>>>>>>>>>
>>>>>>>>>>>> The used ring should be filled by virtqueue_push(), which is done by
>>>>>>>>>>>> userspace beforehand?
>>>>>>>>>>>>
>>>>>>>>>>> After userspace calls virtqueue_push(), it always calls virtio_notify()
>>>>>>>>>>> immediately. In the traditional VM (vhost-vdpa) case, virtio_notify()
>>>>>>>>>>> will inject an irq into the VM and return, and then the vcpu thread will
>>>>>>>>>>> call the interrupt handler. But in the container (virtio-vdpa) case,
>>>>>>>>>>> virtio_notify() will call the interrupt handler directly. So it looks
>>>>>>>>>>> like we have to optimize the virtio-vdpa case. But one problem is that we
>>>>>>>>>>> don't know whether we are in the VM use case or the container use case.
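>>>>>>>>>>>
>>>>>>>>>>> For reference, the virtio-vdpa bus driver dispatches the callback
>>>>>>>>>>> roughly like this (simplified from drivers/virtio/virtio_vdpa.c):
>>>>>>>>>>>
>>>>>>>>>>> static irqreturn_t virtio_vdpa_virtqueue_cb(void *private)
>>>>>>>>>>> {
>>>>>>>>>>>         struct virtio_vdpa_vq_info *info = private;
>>>>>>>>>>>
>>>>>>>>>>>         /* Runs the virtio driver's interrupt handler in whatever
>>>>>>>>>>>          * context the parent device invokes the callback from,
>>>>>>>>>>>          * i.e. the ioctl context in the VDUSE case. */
>>>>>>>>>>>         return vring_interrupt(0, info->vq);
>>>>>>>>>>> }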
>>>>>>>>>> Yes, but I still don't get why the used ring is empty after the ioctl()?
>>>>>>>>>> The used ring does not use a bounce page, so it should be visible to the
>>>>>>>>>> kernel driver. What did I miss :) ?
>>>>>>>>>>
>>>>>>>>> Sorry, I'm not saying the kernel can't see the correct used vring. I
>>>>>>>>> mean the kernel will consume the used vring directly in the ioctl
>>>>>>>>> context in the virtio-vdpa case. In userspace's view, that means
>>>>>>>>> virtqueue_push() is the used vring's producer and virtio_notify() is
>>>>>>>>> the used vring's consumer. They will be called one by one in one
>>>>>>>>> thread rather than in different threads, which looks odd and has a
>>>>>>>>> bad effect on performance.
>>>>>>>> Yes, that's why we need a workqueue (the WQ_UNBOUND one you used). Or do
>>>>>>>> you want to squash this patch into patch 8?
>>>>>>>>
>>>>>>>> So I think we can see an obvious difference when virtio-vdpa is used.
>>>>>>>>
>>>>>>> But it looks like we don't need this workqueue in the vhost-vdpa case.
>>>>>>> Any suggestions?
>>>>>> I haven't thought about it deeply. But I feel we can solve this by using
>>>>>> the irq bypass manager (or something similar). Then we don't need the
>>>>>> interrupt to be relayed via the workqueue and vdpa. But I'm not sure how
>>>>>> hard it will be.
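>>>>>>
>>>>>> For reference, vhost-vdpa hooks a real vq irq into the bypass manager
>>>>>> roughly like this (simplified from drivers/vhost/vdpa.c, error handling
>>>>>> omitted; VDUSE has no hardware irq, so the producer side would need to
>>>>>> be something different):
>>>>>>
>>>>>>         int irq = ops->get_vq_irq(vdpa, qid);
>>>>>>
>>>>>>         if (!vq->call_ctx.ctx || irq < 0)
>>>>>>                 return;
>>>>>>
>>>>>>         vq->call_ctx.producer.token = vq->call_ctx.ctx;
>>>>>>         vq->call_ctx.producer.irq = irq;
>>>>>>         irq_bypass_register_producer(&vq->call_ctx.producer);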
>>>>>>
>>>>> Or let vdpa bus drivers give us some information?
>>>> This kind of 'type' was proposed in the early RFCs of the vDPA series. One
>>>> issue is that at the device level, we should not differentiate virtio from
>>>> vhost, so if we introduce that, it might encourage people to design a
>>>> device that is dedicated to vhost or virtio, which might not be good.
>>>>
>>>> But we can revisit this when necessary.
>>>>
>>> OK, I see. How about adding some information to ops.set_vq_cb()?
>>
>> I'm not sure I get this. Maybe you can explain a little bit more?
>>
> For example, add an extra parameter to ops.set_vq_cb() to indicate
> whether this callback will trigger the interrupt handler directly.
Sounds interesting. I think it may work.
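
Maybe something along these lines, just to make sure we mean the same
thing (the extra parameter name below is made up):

        /* in struct vdpa_config_ops */
        void (*set_vq_cb)(struct vdpa_device *vdev, u16 idx,
                          struct vdpa_callback *cb, bool direct_irq);

direct_irq would tell the parent that cb->callback() may end up running
the driver's interrupt handler directly (the virtio-vdpa case), so the
parent can decide whether to defer the injection, e.g. to a workqueue.
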
Thanks
>
> Thanks,
> Yongji
>