Message-ID: <TY2PR06MB34248F29ED36A5DBB4FE0E2E8551A@TY2PR06MB3424.apcprd06.prod.outlook.com>
Date: Fri, 9 Jun 2023 00:42:22 +0000
From: Angus Chen <angus.chen@...uarmicro.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
CC: "jasowang@...hat.com" <jasowang@...hat.com>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from
add_config
> -----Original Message-----
> From: Michael S. Tsirkin <mst@...hat.com>
> Sent: Friday, June 9, 2023 3:45 AM
> To: Angus Chen <angus.chen@...uarmicro.com>
> Cc: jasowang@...hat.com; virtualization@...ts.linux-foundation.org;
> linux-kernel@...r.kernel.org
> Subject: Re: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from
> add_config
>
> On Thu, Jun 08, 2023 at 05:01:24PM +0800, Angus Chen wrote:
> > When adding a virtio_pci vdpa device, check the number of vqs the
> > device supports against max_vq_pairs from add_config.
> > Start simply by failing if the provisioned #qp is not equal to the
> > one that the hardware has.
> >
> > Signed-off-by: Angus Chen <angus.chen@...uarmicro.com>
>
> I am not sure about this one. How does userspace know
> which values are legal?
Maybe we can print the device's cap in the dev_err message?
>
> If there's no way then maybe we should just cap the value
> to what device can support but otherwise keep the device
> working.
When I used max_vq_pairs to test vp_vdpa, it didn't work as expected,
and there was no hint as to why.
>
> > ---
> > v1: Use max_vqs from add_config
> > v2: Just fail if max_vqs from add_config is not the same as the
> > device cap. Suggested by Jason.
> >
> > drivers/vdpa/virtio_pci/vp_vdpa.c | 35 ++++++++++++++++++-------------
> > 1 file changed, 21 insertions(+), 14 deletions(-)
> >
> > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > index 281287fae89f..c1fb6963da12 100644
> > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > @@ -480,32 +480,39 @@ static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
> >  	u64 device_features;
> >  	int ret, i;
> >  
> > -	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
> > -				    dev, &vp_vdpa_ops, 1, 1, name, false);
> > -
> > -	if (IS_ERR(vp_vdpa)) {
> > -		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
> > -		return PTR_ERR(vp_vdpa);
> > +	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
> > +		if (add_config->net.max_vq_pairs != (v_mdev->max_supported_vqs / 2)) {
> > +			dev_err(&pdev->dev, "max vqs 0x%x should be equal to 0x%x which device has\n",
> > +				add_config->net.max_vq_pairs*2, v_mdev->max_supported_vqs);
> > +			return -EINVAL;
> > +		}
> >  	}
> >  
> > -	vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
> > -
> > -	vp_vdpa->vdpa.dma_dev = &pdev->dev;
> > -	vp_vdpa->queues = vp_modern_get_num_queues(mdev);
> > -	vp_vdpa->mdev = mdev;
> > -
> >  	device_features = vp_modern_get_features(mdev);
> >  	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) {
> >  		if (add_config->device_features & ~device_features) {
> > -			ret = -EINVAL;
> >  			dev_err(&pdev->dev, "Try to provision features "
> >  				"that are not supported by the device: "
> >  				"device_features 0x%llx provisioned 0x%llx\n",
> >  				device_features, add_config->device_features);
> > -			goto err;
> > +			return -EINVAL;
> >  		}
> >  		device_features = add_config->device_features;
> >  	}
> > +
> > +	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
> > +				    dev, &vp_vdpa_ops, 1, 1, name, false);
> > +
> > +	if (IS_ERR(vp_vdpa)) {
> > +		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
> > +		return PTR_ERR(vp_vdpa);
> > +	}
> > +
> > +	vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
> > +
> > +	vp_vdpa->vdpa.dma_dev = &pdev->dev;
> > +	vp_vdpa->queues = v_mdev->max_supported_vqs;
> > +	vp_vdpa->mdev = mdev;
> >  	vp_vdpa->device_features = device_features;
> >  
> >  	ret = devm_add_action_or_reset(dev, vp_vdpa_free_irq_vectors, pdev);
> > --
> > 2.25.1
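
For context, userspace provisions the pair count through the iproute2
`vdpa` tool; a typical invocation (the PCI address here is hypothetical)
looks like:

```shell
# Show management devices and what they advertise.
vdpa mgmtdev show
# Ask the vp_vdpa management device for 2 queue pairs; with this patch
# the add fails with -EINVAL unless 2 matches the device cap.
vdpa dev add name vdpa0 mgmtdev pci/0000:06:00.2 max_vqp 2
```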