Message-ID: <9d2571b6-0b95-53b3-6989-b4d801eeb623@redhat.com>
Date: Mon, 8 Jun 2020 17:43:58 +0800
From: Jason Wang <jasowang@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
rob.miller@...adcom.com, lingshan.zhu@...el.com,
eperezma@...hat.com, lulu@...hat.com, shahafs@...lanox.com,
hanand@...inx.com, mhabets@...arflare.com, gdawar@...inx.com,
saugatm@...inx.com, vmireyno@...vell.com,
zhangweining@...jie.com.cn, eli@...lanox.com
Subject: Re: [PATCH 5/6] vdpa: introduce virtio pci driver
On 2020/6/8 5:31 PM, Michael S. Tsirkin wrote:
> On Mon, Jun 08, 2020 at 05:18:44PM +0800, Jason Wang wrote:
>> On 2020/6/8 2:32 PM, Michael S. Tsirkin wrote:
>>> On Mon, Jun 08, 2020 at 11:32:31AM +0800, Jason Wang wrote:
>>>> On 2020/6/7 9:51 PM, Michael S. Tsirkin wrote:
>>>>> On Fri, Jun 05, 2020 at 04:54:17PM +0800, Jason Wang wrote:
>>>>>> On 2020/6/2 3:08 PM, Jason Wang wrote:
>>>>>>>>> +static const struct pci_device_id vp_vdpa_id_table[] = {
>>>>>>>>> +	{ PCI_DEVICE(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID) },
>>>>>>>>> +	{ 0 }
>>>>>>>>> +};
>>>>>>>> This looks like it'll create a mess with either virtio pci
>>>>>>>> or vdpa being loaded at random. Maybe just don't specify
>>>>>>>> any IDs for now. Down the road we could get a
>>>>>>>> distinct vendor ID or a range of device IDs for this.
>>>>>>> Right, will do.
>>>>>>>
>>>>>>> Thanks
>>>>>> Rethinking this: if we don't specify any ID, the binding won't work.
>>>>> We can bind manually. It's not really for production anyway, so
>>>>> not a big deal imho.
>>>> I think you mean doing it via "new_id", right?
>>> I really meant driver_override. This is what people have been using
>>> with pci-stub for years now.
>>
>> Do you want me to implement "driver_override" in this series, or is a
>> NULL id_table sufficient?
>
> Doesn't the pci subsystem create driver_override for all devices
> on the pci bus?
Yes, I missed this.
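
For reference, the PCI core's match logic is roughly the following
(paraphrasing drivers/pci/pci-driver.c from memory, so treat it as a
sketch rather than a verbatim quote):

static const struct pci_device_id *pci_match_device(struct pci_driver *drv,
						    struct pci_dev *dev)
{
	const struct pci_device_id *found_id = NULL;

	/* When driver_override is set, only bind to the matching driver */
	if (dev->driver_override && strcmp(dev->driver_override, drv->name))
		return NULL;

	/* dynamic ids (new_id) are checked before the static table */
	found_id = pci_match_id(drv->id_table, dev);

	/* driver_override will always match, send a dummy id */
	if (!found_id && dev->driver_override)
		found_id = &pci_device_id_any;

	return found_id;
}

So with a NULL id_table, writing the driver name to
/sys/bus/pci/devices/<BDF>/driver_override and then the BDF to
/sys/bus/pci/drivers_probe should still bind the device.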
>>>>>> How about using a dedicated subsystem vendor id for this?
>>>>>>
>>>>>> Thanks
>>>>> If the virtio vendor ID is used then the standard driver is expected
>>>>> to bind, right? Maybe use a dedicated vendor ID?
>>>> I meant something like:
>>>>
>>>> static const struct pci_device_id vp_vdpa_id_table[] = {
>>>> 	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID,
>>>> 			 VP_TEST_VENDOR_ID, VP_TEST_DEVICE_ID) },
>>>> 	{ 0 }
>>>> };
>>>>
>>>> Thanks
>>>>
>>> Then the regular virtio driver will still bind to it. It has
>>>
>>> drivers/virtio/virtio_pci_common.c: { PCI_DEVICE(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID) },
>>>
>>>
>> IFCVF uses this to avoid binding to regular virtio devices.
>
> Ow. Indeed:
>
> #define IFCVF_VENDOR_ID 0x1AF4
>
> Which is of course not an IFCVF vendor ID; it's the Red Hat vendor ID.
>
> I missed that.
>
> Does it actually work if you bind a virtio driver to it?
It works.
> I'm guessing not, otherwise they wouldn't need the IFC driver, right?
>
Looking at the driver, they use a dedicated BAR for dealing with
virtqueue state save/restore.
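
Conceptually something like this (illustrative only; the BAR number,
offset and names below are hypothetical, not IFC's actual register
layout):

/* Hypothetical vendor-specific BAR exposing per-virtqueue state
 * (last_avail_idx) for save/restore; not the real IFC layout. */
#define LM_BAR			4
#define LM_VQ_STATE_OFF		0x20

static u16 vq_state_get(void __iomem *lm_base, u16 qid)
{
	return ioread16(lm_base + LM_VQ_STATE_OFF + qid * sizeof(u16));
}

static void vq_state_set(void __iomem *lm_base, u16 qid, u16 last_avail_idx)
{
	iowrite16(last_avail_idx, lm_base + LM_VQ_STATE_OFF + qid * sizeof(u16));
}

The standard virtio capabilities don't expose such state, hence the
vendor-specific BAR.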
>
>
>> Looking at
>> pci_match_one_device() it checks both subvendor and subdevice there.
>>
>> Thanks
>
> But IIUC there is no guarantee that a driver with a specific subvendor
> matches in the presence of a generic one.
> So either IFC or virtio-pci can win, whichever binds first.
I'm not sure I follow. But I tried manually binding IFCVF to qemu's
virtio-net-pci, and it failed.
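
For completeness, the match helper (quoting drivers/pci/pci.h roughly
from memory) compares all four IDs, and the sysfs bind path still goes
through the bus match, which would explain why the manual bind is
rejected; qemu's virtio-net-pci presumably doesn't carry IFC's
subsystem IDs:

static inline const struct pci_device_id *
pci_match_one_device(const struct pci_device_id *id, const struct pci_dev *dev)
{
	if ((id->vendor == PCI_ANY_ID || id->vendor == dev->vendor) &&
	    (id->device == PCI_ANY_ID || id->device == dev->device) &&
	    (id->subvendor == PCI_ANY_ID || id->subvendor == dev->subsystem_vendor) &&
	    (id->subdevice == PCI_ANY_ID || id->subdevice == dev->subsystem_device) &&
	    !((id->class ^ dev->class) & id->class_mask))
		return id;
	return NULL;
}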
Thanks
>
> I guess we need to blacklist IFC in virtio pci probe code. Ugh.
>