Message-ID: <TY2PR06MB34247A17ADD347D439EF84DA8526A@TY2PR06MB3424.apcprd06.prod.outlook.com>
Date: Mon, 26 Jun 2023 02:42:28 +0000
From: Angus Chen <angus.chen@...uarmicro.com>
To: Jason Wang <jasowang@...hat.com>
CC: "mst@...hat.com" <mst@...hat.com>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from add_config
Hi, Jason.
> -----Original Message-----
> From: Jason Wang <jasowang@...hat.com>
> Sent: Monday, June 26, 2023 10:30 AM
> To: Angus Chen <angus.chen@...uarmicro.com>
> Cc: mst@...hat.com; virtualization@...ts.linux-foundation.org;
> linux-kernel@...r.kernel.org
> Subject: Re: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from add_config
>
> On Thu, Jun 8, 2023 at 5:02 PM Angus Chen <angus.chen@...uarmicro.com>
> wrote:
> >
> > When adding a virtio_pci vdpa device, check the number of vqs the
> > device supports against max_vq_pairs from add_config.
> > Start simple: fail if the provisioned number of queue pairs is not
> > equal to the one the hardware has.
> >
> > Signed-off-by: Angus Chen <angus.chen@...uarmicro.com>
> > ---
> > v1: Use max_vqs from add_config
> > v2: Just return failure if max_vqs from add_config is not the same
> > as the device cap. Suggested by Jason.
> >
> > drivers/vdpa/virtio_pci/vp_vdpa.c | 35 ++++++++++++++++++-------------
> > 1 file changed, 21 insertions(+), 14 deletions(-)
> >
> > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > index 281287fae89f..c1fb6963da12 100644
> > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > @@ -480,32 +480,39 @@ static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
> >  	u64 device_features;
> >  	int ret, i;
> >
> > -	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
> > -				    dev, &vp_vdpa_ops, 1, 1, name, false);
> > -
> > -	if (IS_ERR(vp_vdpa)) {
> > -		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
> > -		return PTR_ERR(vp_vdpa);
> > +	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
> > +		if (add_config->net.max_vq_pairs != (v_mdev->max_supported_vqs / 2)) {
> > +			dev_err(&pdev->dev, "max vqs 0x%x should be equal to 0x%x which device has\n",
> > +				add_config->net.max_vq_pairs * 2, v_mdev->max_supported_vqs);
> > +			return -EINVAL;
> > +		}
> >  	}
> >
> > -	vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
> > -
> > -	vp_vdpa->vdpa.dma_dev = &pdev->dev;
> > -	vp_vdpa->queues = vp_modern_get_num_queues(mdev);
> > -	vp_vdpa->mdev = mdev;
> > -
> >  	device_features = vp_modern_get_features(mdev);
> >  	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) {
> >  		if (add_config->device_features & ~device_features) {
> > -			ret = -EINVAL;
> >  			dev_err(&pdev->dev, "Try to provision features "
> >  				"that are not supported by the device: "
> >  				"device_features 0x%llx provisioned 0x%llx\n",
> >  				device_features, add_config->device_features);
> > -			goto err;
> > +			return -EINVAL;
> >  		}
> >  		device_features = add_config->device_features;
> >  	}
> > +
> > +	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
> > +				    dev, &vp_vdpa_ops, 1, 1, name, false);
> > +
> > +	if (IS_ERR(vp_vdpa)) {
> > +		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
> > +		return PTR_ERR(vp_vdpa);
> > +	}
> > +
> > +	vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
> > +
> > +	vp_vdpa->vdpa.dma_dev = &pdev->dev;
> > +	vp_vdpa->queues = v_mdev->max_supported_vqs;
>
> Why bother with those changes?
>
> mgtdev->max_supported_vqs = vp_modern_get_num_queues(mdev);
max_supported_vqs will not change at runtime, so we can read it from
mgtdev->max_supported_vqs, which is cached at probe time.
If we called vp_modern_get_num_queues(mdev) here instead, it would issue
TLPs to communicate with the device. Using the cached value just saves
some TLPs.
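
For illustration, a rough sketch of the idea (the first line is the
probe-time assignment you quoted; the helper below and its name are
made up for this mail, not taken from the patch):

/* Probe time: one config-space access (TLPs on the wire), result
 * cached in the management device.
 */
mgtdev->max_supported_vqs = vp_modern_get_num_queues(mdev);

/* dev_add() time: no device access at all, just compare the
 * provisioned value against the cached one.
 */
static bool vqp_matches_device(const struct vdpa_dev_set_config *cfg,
			       const struct vdpa_mgmt_dev *v_mdev)
{
	return cfg->net.max_vq_pairs == v_mdev->max_supported_vqs / 2;
}
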
>
> Thanks
>
>
> > +	vp_vdpa->mdev = mdev;
> >  	vp_vdpa->device_features = device_features;
> >
> >  	ret = devm_add_action_or_reset(dev, vp_vdpa_free_irq_vectors, pdev);
> > --
> > 2.25.1
> >
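
FWIW, with this patch the mismatch case can be exercised from userspace
through the vdpa management API, e.g. with iproute2's vdpa tool (device
name and PCI address here are made up): "vdpa dev add name vdpa0
mgmtdev pci/0000:03:00.0 max_vqp 4" now fails with -EINVAL unless the
device exposes exactly 8 virtqueues (4 pairs).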