Message-ID: <6b1d1ef3-d65e-08c2-5b65-32969bb5ecbc@redhat.com>
Date: Fri, 5 Jun 2020 16:54:17 +0800
From: Jason Wang <jasowang@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
rob.miller@...adcom.com, lingshan.zhu@...el.com,
eperezma@...hat.com, lulu@...hat.com, shahafs@...lanox.com,
hanand@...inx.com, mhabets@...arflare.com, gdawar@...inx.com,
saugatm@...inx.com, vmireyno@...vell.com,
zhangweining@...jie.com.cn, eli@...lanox.com
Subject: Re: [PATCH 5/6] vdpa: introduce virtio pci driver
On 2020/6/2 3:08 PM, Jason Wang wrote:
>>
>>> +static const struct pci_device_id vp_vdpa_id_table[] = {
>>> + { PCI_DEVICE(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID) },
>>> + { 0 }
>>> +};
>> This looks like it'll create a mess with either virtio pci
>> or vdpa being loaded at random. Maybe just don't specify
>> any IDs for now. Down the road we could get a
>> distinct vendor ID or a range of device IDs for this.
>
>
> Right, will do.
>
> Thanks
Rethinking about this: if we don't specify any ID, the binding won't work.
How about using a dedicated subsystem vendor id for this?
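E.g. something along these lines, where PCI_SUBVENDOR_ID_VDPA is just a
placeholder for whatever subsystem vendor id would actually get reserved:

static const struct pci_device_id vp_vdpa_id_table[] = {
	/* Match only devices that carry the dedicated subsystem
	 * vendor id, so virtio-pci and vp_vdpa don't race to bind
	 * the same devices.
	 */
	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID,
			 PCI_SUBVENDOR_ID_VDPA, PCI_ANY_ID) },
	{ 0 }
};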
Thanks