Message-ID: <CACGkMEuYcR21_k0hyisWzTVHG4+a3Y=ym101Z5P8TSWyNkHWxA@mail.gmail.com>
Date: Tue, 9 May 2023 11:14:30 +0800
From: Jason Wang <jasowang@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Wenliang Wang <wangwenliang.1995@...edance.com>, davem@...emloft.net, 
	edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com, 
	zhengqi.arch@...edance.com, willemdebruijn.kernel@...il.com, 
	virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org, 
	linux-kernel@...r.kernel.org, xuanzhuo@...ux.alibaba.com
Subject: Re: [PATCH v4] virtio_net: suppress cpu stall when free_unused_bufs

On Mon, May 8, 2023 at 2:47 PM Michael S. Tsirkin <mst@...hat.com> wrote:
>
> On Mon, May 08, 2023 at 02:13:42PM +0800, Jason Wang wrote:
> > On Mon, May 8, 2023 at 2:08 PM Michael S. Tsirkin <mst@...hat.com> wrote:
> > >
> > > On Mon, May 08, 2023 at 11:12:03AM +0800, Jason Wang wrote:
> > > >
> > > > On 2023/5/7 21:34, Michael S. Tsirkin wrote:
> > > > > On Fri, May 05, 2023 at 11:28:25AM +0800, Jason Wang wrote:
> > > > > > On Thu, May 4, 2023 at 10:27 AM Wenliang Wang
> > > > > > <wangwenliang.1995@...edance.com> wrote:
> > > > > > > For multi-queue and large ring-size use case, the following error
> > > > > > > occurred when free_unused_bufs:
> > > > > > > rcu: INFO: rcu_sched self-detected stall on CPU.
> > > > > > >
> > > > > > > Fixes: 986a4f4d452d ("virtio_net: multiqueue support")
> > > > > > > Signed-off-by: Wenliang Wang <wangwenliang.1995@...edance.com>
> > > > > > > ---
> > > > > > > v2:
> > > > > > > -add need_resched check.
> > > > > > > -apply same logic to sq.
> > > > > > > v3:
> > > > > > > -use cond_resched instead.
> > > > > > > v4:
> > > > > > > -add fixes tag
> > > > > > > ---
> > > > > > >   drivers/net/virtio_net.c | 2 ++
> > > > > > >   1 file changed, 2 insertions(+)
> > > > > > >
> > > > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > > > index 8d8038538fc4..a12ae26db0e2 100644
> > > > > > > --- a/drivers/net/virtio_net.c
> > > > > > > +++ b/drivers/net/virtio_net.c
> > > > > > > @@ -3560,12 +3560,14 @@ static void free_unused_bufs(struct virtnet_info *vi)
> > > > > > >                  struct virtqueue *vq = vi->sq[i].vq;
> > > > > > >                  while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> > > > > > >                          virtnet_sq_free_unused_buf(vq, buf);
> > > > > > > +               cond_resched();
> > > > > > Does this really address the case when the virtqueue is very large?
> > > > > >
> > > > > > Thanks
> > > > >
> > > > > it does in that a very large queue is still just 64k in size.
> > > > > we might however have 64k of these queues.
> > > >
> > > >
> > > > Ok, but we have other similar loops especially the refill, I think we may
> > > > need cond_resched() there as well.
> > > >
> > > > Thanks
> > > >
> > >
> > > Refill is already per vq isn't it?
> >
> > Not for the refill_work().
> >
> > Thanks
>
> Good point, refill_work probably needs cond_resched, too.
> And I guess virtnet_open?

Yes, let me draft a patch.

Thanks

>
>
> > >
> > >
> > > > >
> > > > > > >          }
> > > > > > >
> > > > > > >          for (i = 0; i < vi->max_queue_pairs; i++) {
> > > > > > >                  struct virtqueue *vq = vi->rq[i].vq;
> > > > > > >                  while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> > > > > > >                          virtnet_rq_free_unused_buf(vq, buf);
> > > > > > > +               cond_resched();
> > > > > > >          }
> > > > > > >   }
> > > > > > >
> > > > > > > --
> > > > > > > 2.20.1
> > > > > > >
> > >
>

