Message-ID: <CACGkMEvjgyxs3HX_ZzUbMticntqnUxDQJMrr2MqTBwuRB7jCdw@mail.gmail.com>
Date: Wed, 7 Sep 2022 14:53:19 +0800
From: Jason Wang <jasowang@...hat.com>
To: Eli Cohen <elic@...dia.com>
Cc: "mst@...hat.com" <mst@...hat.com>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] vdpa: conditionally fill max queue pair for stats
On Wed, Sep 7, 2022 at 2:11 PM Eli Cohen <elic@...dia.com> wrote:
>
> > From: Jason Wang <jasowang@...hat.com>
> > Sent: Wednesday, 7 September 2022 9:01
> > To: mst@...hat.com; jasowang@...hat.com; Eli Cohen <elic@...dia.com>;
> > virtualization@...ts.linux-foundation.org; linux-kernel@...r.kernel.org
> > Subject: [PATCH] vdpa: conditionally fill max queue pair for stats
> >
> > For a device without the multiqueue feature, we will read 0 as
> > max_virtqueue_pairs from the config.
> If this is the case for other vdpa vendor drivers, shouldn't we fix it there? After all,
> config->max_virtqueue_pairs should always show valid values.
Not for the case when the device doesn't offer MQ. According to the
spec, the max_virtqueue_pairs field doesn't exist in this case.
>
> > So if we fill VDPA_ATTR_DEV_NET_CFG_MAX_VQP with the value we read
> > from the config, we will confuse the user.
> >
> > Fix this by only filling in the value when multiqueue is offered by
> > the device, so userspace can assume 1 when the attr is not provided.
> >
> > Fixes: 13b00b135665c ("vdpa: Add support for querying vendor statistics")
> > Cc: Eli Cohen <elic@...dia.com>
> > Signed-off-by: Jason Wang <jasowang@...hat.com>
> > ---
> >  drivers/vdpa/vdpa.c | 9 ++++-----
> >  1 file changed, 4 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> > index c06c02704461..bc328197263f 100644
> > --- a/drivers/vdpa/vdpa.c
> > +++ b/drivers/vdpa/vdpa.c
> > @@ -894,7 +894,6 @@ static int vdpa_fill_stats_rec(struct vdpa_device *vdev, struct sk_buff *msg,
> >  {
> >  	struct virtio_net_config config = {};
> >  	u64 features;
> > -	u16 max_vqp;
> >  	u8 status;
> >  	int err;
> >
> > @@ -905,15 +904,15 @@ static int vdpa_fill_stats_rec(struct vdpa_device *vdev, struct sk_buff *msg,
> >  	}
> >  	vdpa_get_config_unlocked(vdev, 0, &config, sizeof(config));
> >
> > -	max_vqp = __virtio16_to_cpu(true, config.max_virtqueue_pairs);
> > -	if (nla_put_u16(msg, VDPA_ATTR_DEV_NET_CFG_MAX_VQP, max_vqp))
> > -		return -EMSGSIZE;
> > -
> >  	features = vdev->config->get_driver_features(vdev);
> >  	if (nla_put_u64_64bit(msg, VDPA_ATTR_DEV_NEGOTIATED_FEATURES,
> >  			      features, VDPA_ATTR_PAD))
> >  		return -EMSGSIZE;
> >
> > +	err = vdpa_dev_net_mq_config_fill(vdev, msg, features, &config);
> > +	if (err)
> > +		return err;
> > +
>
> So that means you can't read statistics when MQ is not supported. Is this worth sacrificing?
vdpa_dev_net_mq_config_fill() will return 0 in the case of !MQ, so it
should still work.
Thanks
>
> > if (nla_put_u32(msg, VDPA_ATTR_DEV_QUEUE_INDEX, index))
> > return -EMSGSIZE;
> >
> > --
> > 2.25.1
>