Message-ID: <1a40a361-536f-c1a6-8a95-09df80014dc5@intel.com>
Date: Fri, 8 Jul 2022 14:54:26 +0800
From: "Zhu, Lingshan" <lingshan.zhu@...el.com>
To: Jason Wang <jasowang@...hat.com>, mst@...hat.com
Cc: virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
parav@...dia.com, xieyongji@...edance.com, gautam.dawar@....com
Subject: Re: [PATCH V3 2/6] vDPA/ifcvf: support userspace to query features and MQ of a management device
On 7/4/2022 12:43 PM, Jason Wang wrote:
>
> On 2022/7/1 21:28, Zhu Lingshan wrote:
>> Adapting to the current netlink interface, this commit allows userspace
>> to query the feature bits and MQ capability of a management device.
>>
>> Signed-off-by: Zhu Lingshan <lingshan.zhu@...el.com>
>> ---
>> drivers/vdpa/ifcvf/ifcvf_base.c | 12 ++++++++++++
>> drivers/vdpa/ifcvf/ifcvf_base.h | 1 +
>> drivers/vdpa/ifcvf/ifcvf_main.c | 3 +++
>> 3 files changed, 16 insertions(+)
>>
>> diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c b/drivers/vdpa/ifcvf/ifcvf_base.c
>> index fb957b57941e..7c5f1cc93ad9 100644
>> --- a/drivers/vdpa/ifcvf/ifcvf_base.c
>> +++ b/drivers/vdpa/ifcvf/ifcvf_base.c
>> @@ -346,6 +346,18 @@ int ifcvf_set_vq_state(struct ifcvf_hw *hw, u16 qid, u16 num)
>>  	return 0;
>>  }
>> +u16 ifcvf_get_max_vq_pairs(struct ifcvf_hw *hw)
>> +{
>> +	struct virtio_net_config __iomem *config;
>> +	u16 val, mq;
>> +
>> +	config = hw->dev_cfg;
>> +	val = vp_ioread16((__le16 __iomem *)&config->max_virtqueue_pairs);
>> +	mq = le16_to_cpu((__force __le16)val);
>> +
>> +	return mq;
>> +}
>> +
>>  static int ifcvf_hw_enable(struct ifcvf_hw *hw)
>>  {
>>  	struct virtio_pci_common_cfg __iomem *cfg;
>> diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h b/drivers/vdpa/ifcvf/ifcvf_base.h
>> index f5563f665cc6..d54a1bed212e 100644
>> --- a/drivers/vdpa/ifcvf/ifcvf_base.h
>> +++ b/drivers/vdpa/ifcvf/ifcvf_base.h
>> @@ -130,6 +130,7 @@ u64 ifcvf_get_hw_features(struct ifcvf_hw *hw);
>>  int ifcvf_verify_min_features(struct ifcvf_hw *hw, u64 features);
>>  u16 ifcvf_get_vq_state(struct ifcvf_hw *hw, u16 qid);
>>  int ifcvf_set_vq_state(struct ifcvf_hw *hw, u16 qid, u16 num);
>> +u16 ifcvf_get_max_vq_pairs(struct ifcvf_hw *hw);
>>  struct ifcvf_adapter *vf_to_adapter(struct ifcvf_hw *hw);
>>  int ifcvf_probed_virtio_net(struct ifcvf_hw *hw);
>>  u32 ifcvf_get_config_size(struct ifcvf_hw *hw);
>> diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
>> index 0a5670729412..3ff7096d30f1 100644
>> --- a/drivers/vdpa/ifcvf/ifcvf_main.c
>> +++ b/drivers/vdpa/ifcvf/ifcvf_main.c
>> @@ -791,6 +791,9 @@ static int ifcvf_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
>>  	vf->hw_features = ifcvf_get_hw_features(vf);
>>  	vf->config_size = ifcvf_get_config_size(vf);
>> +	ifcvf_mgmt_dev->mdev.max_supported_vqs = ifcvf_get_max_vq_pairs(vf);
>
>
> Do we want #qps or #queues?
>
> FYI, vp_vdpa did:
>
> drivers/vdpa/virtio_pci/vp_vdpa.c: mgtdev->max_supported_vqs = vp_modern_get_num_queues(mdev);
Oh yes, it should be the number of queues, I will fix this.
Thanks
>
> Thanks
>
>
>> +	ifcvf_mgmt_dev->mdev.supported_features = vf->hw_features;
>> +
>>  	adapter->vdpa.mdev = &ifcvf_mgmt_dev->mdev;
>>  	ret = _vdpa_register_device(&adapter->vdpa, vf->nr_vring);
>>  	if (ret) {
>