Message-ID: <CACGkMEs4wUh0TydcXSMR2GdBSTk+H-Tkbhn53BywEeiK9cA9Gg@mail.gmail.com>
Date: Fri, 28 Jul 2023 09:42:00 +0800
From: Jason Wang <jasowang@...hat.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: Gavin Li <gavinl@...dia.com>, mst@...hat.com, xuanzhuo@...ux.alibaba.com,
davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org, ast@...nel.org,
daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com,
jiri@...dia.com, dtatulea@...dia.com, gavi@...dia.com,
virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, bpf@...r.kernel.org,
Heng Qi <hengqi@...ux.alibaba.com>
Subject: Re: [PATCH net-next V4 2/3] virtio_net: support per queue interrupt
coalesce command
On Thu, Jul 27, 2023 at 9:28 PM Paolo Abeni <pabeni@...hat.com> wrote:
>
> On Tue, 2023-07-25 at 16:07 +0300, Gavin Li wrote:
> > Add interrupt_coalesce config in send_queue and receive_queue to cache user
> > config.
> >
> > Send the per-virtqueue interrupt moderation config to the underlying device
> > in order to achieve more efficient interrupt moderation and better CPU
> > utilization in the guest VM.
> >
> > Additionally, address all the VQs when updating the global configuration,
> > as the individual VQs' configuration can now diverge from the global
> > configuration.
> >
> > Signed-off-by: Gavin Li <gavinl@...dia.com>
> > Reviewed-by: Dragos Tatulea <dtatulea@...dia.com>
> > Reviewed-by: Jiri Pirko <jiri@...dia.com>
> > Acked-by: Michael S. Tsirkin <mst@...hat.com>
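
For context, a minimal sketch of the per-queue caching described in the commit
message above; the struct and field names (virtnet_interrupt_coalesce,
max_usecs, max_packets) are illustrative assumptions rather than text quoted
from the posted patch:

/* Illustrative sketch only: per-queue coalescing parameters cached by the
 * driver so they can be pushed to the device with a per-virtqueue command
 * and reported back through ethtool.
 */
struct virtnet_interrupt_coalesce {
        u32 max_usecs;          /* delay before raising an interrupt */
        u32 max_packets;        /* packets to accumulate before interrupting */
};

Embedding one instance in each send_queue and receive_queue would let the
per-queue ethtool path and the global path share a single representation of
the user's configuration.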
>
> FTR, this patch is significantly different from the version previously
> acked/reviewed, I'm unsure if all the reviewers are ok with the new
> one.
Good point. I plan to review this no later than next Monday and offer my
ack if necessary. Please hold this series for now.
Thanks
>
> [...]
>
> >  static int virtnet_set_coalesce(struct net_device *dev,
> >                                  struct ethtool_coalesce *ec,
> >                                  struct kernel_ethtool_coalesce *kernel_coal,
> >                                  struct netlink_ext_ack *extack)
> >  {
> >          struct virtnet_info *vi = netdev_priv(dev);
> > -        int ret, i, napi_weight;
> > +        int ret, queue_number, napi_weight;
> >          bool update_napi = false;
> >
> >          /* Can't change NAPI weight if the link is up */
> >          napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
> > -        if (napi_weight ^ vi->sq[0].napi.weight) {
> > -                if (dev->flags & IFF_UP)
> > -                        return -EBUSY;
> > -                else
> > -                        update_napi = true;
> > +        for (queue_number = 0; queue_number < vi->max_queue_pairs; queue_number++) {
> > +                ret = virtnet_should_update_vq_weight(dev->flags, napi_weight,
> > +                                                      vi->sq[queue_number].napi.weight,
> > +                                                      &update_napi);
> > +                if (ret)
> > +                        return ret;
> > +
> > +                if (update_napi) {
> > +                        /* All queues that belong to [queue_number, queue_count] will be
> > +                         * updated for the sake of simplicity, which might not be necessary
>
> It looks like the comment above still refers to the old code. Should
> be:
> [queue_number, vi->max_queue_pairs]
>
> Otherwise LGTM, thanks!
>
> Paolo
>
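
For reference, a sketch of what the virtnet_should_update_vq_weight() helper
called in the hunk above could look like, reconstructed from the branch the
patch removes; the real helper is introduced elsewhere in the series and may
differ:

/* Sketch (kernel context assumed; IFF_UP comes from <linux/if.h>):
 * refuse to change a queue's NAPI weight while the link is up, otherwise
 * flag that an update is needed.
 */
static int virtnet_should_update_vq_weight(int dev_flags, int weight,
                                           int vq_weight, bool *may_update)
{
        if (weight ^ vq_weight) {
                if (dev_flags & IFF_UP)
                        return -EBUSY;
                *may_update = true;
        }

        return 0;
}

Factoring the check out this way lets the caller walk every queue pair and
bail out with -EBUSY before any NAPI weight is touched while the link is up,
which matches the loop shown in the hunk.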