Message-ID: <bcb03e95-d8b9-6e19-5b0e-0119d3f43d6d@redhat.com>
Date: Mon, 13 Jul 2020 16:13:35 +0800
From: Jason Wang <jasowang@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>,
Zhu Lingshan <lingshan.zhu@...el.com>
Cc: alex.williamson@...hat.com, pbonzini@...hat.com,
sean.j.christopherson@...el.com, wanpengli@...cent.com,
virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
netdev@...r.kernel.org, dan.daly@...el.com
Subject: Re: [PATCH 2/7] kvm/vfio: detect assigned device via irqbypass manager
On 2020/7/13 5:06 AM, Michael S. Tsirkin wrote:
> On Sun, Jul 12, 2020 at 10:49:21PM +0800, Zhu Lingshan wrote:
>> We used to detect assigned devices via counters manipulated by VFIO.
>> This is less flexible considering VFIO is not the only interface for
>> assigned devices; vDPA devices have dedicated backend hardware as
>> well. So this patch tries to detect assigned devices via the
>> irqbypass manager.
>>
>> We will increase/decrease the assigned device counter in kvm/x86.
>> Both vDPA and VFIO would go through this code path.
>>
>> This code path only affects x86 for now.
>>
>> Signed-off-by: Zhu Lingshan <lingshan.zhu@...el.com>
>
> I think it's best to leave VFIO alone. Add appropriate APIs for VDPA,
> worry about converting existing users later.
Just to make sure I understand, did you mean:
1) introduce another bridge for vDPA
or
2) only detect vDPA via the bypass manager? (we can leave the VFIO code
as is; then the assigned device counter may increase/decrease twice if
VFIO uses the irq bypass manager, which should do no harm; see the
sketch below).
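
For reference, a rough sketch of how I understand the x86 assignment
counter works (written from memory as an assumption, not copied from the
current tree), which is why the double accounting in 2) should be
harmless:

/*
 * Sketch only: the "assigned device" state is an atomic reference
 * count on the VM, so an extra start/end pair from VFIO only bumps the
 * count higher and later drops it back; kvm_arch_has_assigned_device()
 * still reports the right answer.
 */
void kvm_arch_start_assignment(struct kvm *kvm)
{
	atomic_inc(&kvm->arch.assigned_device_count);
}

void kvm_arch_end_assignment(struct kvm *kvm)
{
	atomic_dec(&kvm->arch.assigned_device_count);
}

bool kvm_arch_has_assigned_device(struct kvm *kvm)
{
	return atomic_read(&kvm->arch.assigned_device_count);
}
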
Thanks
>
>> ---
>> arch/x86/kvm/x86.c | 10 ++++++++--
>> virt/kvm/vfio.c | 2 --
>> 2 files changed, 8 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 00c88c2..20c07d3 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -10624,11 +10624,17 @@ int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
>> {
>> struct kvm_kernel_irqfd *irqfd =
>> container_of(cons, struct kvm_kernel_irqfd, consumer);
>> + int ret;
>>
>> irqfd->producer = prod;
>> + kvm_arch_start_assignment(irqfd->kvm);
>> + ret = kvm_x86_ops.update_pi_irte(irqfd->kvm,
>> + prod->irq, irqfd->gsi, 1);
>> +
>> + if (ret)
>> + kvm_arch_end_assignment(irqfd->kvm);
>>
>> - return kvm_x86_ops.update_pi_irte(irqfd->kvm,
>> - prod->irq, irqfd->gsi, 1);
>> + return ret;
>> }
>>
>> void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
>> diff --git a/virt/kvm/vfio.c b/virt/kvm/vfio.c
>> index 8fcbc50..111da52 100644
>> --- a/virt/kvm/vfio.c
>> +++ b/virt/kvm/vfio.c
>> @@ -226,7 +226,6 @@ static int kvm_vfio_set_group(struct kvm_device *dev, long attr, u64 arg)
>> list_add_tail(&kvg->node, &kv->group_list);
>> kvg->vfio_group = vfio_group;
>>
>> - kvm_arch_start_assignment(dev->kvm);
>>
>> mutex_unlock(&kv->lock);
>>
>> @@ -254,7 +253,6 @@ static int kvm_vfio_set_group(struct kvm_device *dev, long attr, u64 arg)
>> continue;
>>
>> list_del(&kvg->node);
>> - kvm_arch_end_assignment(dev->kvm);
>> #ifdef CONFIG_SPAPR_TCE_IOMMU
>> kvm_spapr_tce_release_vfio_group(dev->kvm,
>> kvg->vfio_group);
>> --
>> 1.8.3.1