Message-ID: <20210413153023-mutt-send-email-mst@kernel.org>
Date:   Tue, 13 Apr 2021 15:34:38 -0400
From:   "Michael S. Tsirkin" <mst@...hat.com>
To:     Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc:     Network Development <netdev@...r.kernel.org>,
        Jason Wang <jasowang@...hat.com>,
        virtualization@...ts.linux-foundation.org,
        David Miller <davem@...emloft.net>
Subject: Re: [PATCH net-next v3 5/5] virtio-net: keep tx interrupts disabled
 unless kick

On Tue, Apr 13, 2021 at 10:27:16AM -0400, Willem de Bruijn wrote:
> On Tue, Apr 13, 2021 at 1:06 AM Michael S. Tsirkin <mst@...hat.com> wrote:
> >
> > On Mon, Apr 24, 2017 at 01:49:30PM -0400, Willem de Bruijn wrote:
> > > From: Willem de Bruijn <willemb@...gle.com>
> > >
> > > Tx napi mode increases the rate of transmit interrupts. Suppress some
> > > by masking interrupts while more packets are expected. The interrupts
> > > will be reenabled before the last packet is sent.
> > >
> > > This optimization reduces the throughput drop with tx napi for
> > > unidirectional flows such as UDP_STREAM that do not benefit from
> > > cleaning tx completions in the receive napi handler.
> > >
> > > Signed-off-by: Willem de Bruijn <willemb@...gle.com>
> > > ---
> > >  drivers/net/virtio_net.c | 3 +++
> > >  1 file changed, 3 insertions(+)
> > >
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index 9dd978f34c1f..003143835766 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -1200,6 +1200,9 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
> > >       /* Free up any pending old buffers before queueing new ones. */
> > >       free_old_xmit_skbs(sq);
> > >
> > > +     if (use_napi && kick)
> > > +             virtqueue_enable_cb_delayed(sq->vq);
> > > +
> > >       /* timestamp packet in software */
> > >       skb_tx_timestamp(skb);
> >
> >
> > I have been poking at this code today and I noticed that it
> > actually does enable cb where the commit log says it is masking interrupts.
> > I think the reason is that with event index, previously disable cb
> > actually did nothing, while virtqueue_enable_cb_delayed pushed
> > the event index out some more.
> > And this likely explains why it does not work well for packed,
> > where virtqueue_enable_cb_delayed is the same as virtqueue_enable_cb.
> >
> > Right? Or did I miss something?
> 
> This was definitely based on the split queue with event index handling.
> 
> When you say it does not work well for packed, you mean that with packed
> mode we see the consequences of the race condition when accessing vq
> without holding __netif_tx_lock, in a way that I did not notice with
> split queue with event index, right?

I mean that currently packed does not seem to show the same performance
gains in a micro-benchmark. It could be due to enabling interrupts more
aggressively there.
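
A minimal user-space sketch may help illustrate the difference being
discussed. It models, under stated assumptions, how on a split ring with
event index virtqueue_disable_cb() only sets a flag the device may ignore,
while virtqueue_enable_cb_delayed() pushes the interrupt point out by
roughly 3/4 of the outstanding buffers. The struct, field names, and the
3/4 factor are illustrative assumptions, not the driver's actual internals:

/* Hypothetical model of split-ring interrupt suppression with EVENT_IDX;
 * not kernel code. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

struct model_vq {
	uint16_t used_idx;    /* buffers the device has completed */
	uint16_t avail_idx;   /* buffers the driver has posted */
	uint16_t used_event;  /* interrupt when used_idx passes this */
	bool     no_irq_flag; /* avail flag; ignored once EVENT_IDX is negotiated */
};

/* Models virtqueue_disable_cb(): only sets a flag the device may ignore. */
static void model_disable_cb(struct model_vq *vq)
{
	vq->no_irq_flag = true;   /* device keeps honoring used_event instead */
}

/* Models virtqueue_enable_cb(): interrupt on the next completion. */
static void model_enable_cb(struct model_vq *vq)
{
	vq->used_event = vq->used_idx;
}

/* Models virtqueue_enable_cb_delayed(): defer the interrupt until roughly
 * 3/4 of the currently outstanding buffers have been used. */
static void model_enable_cb_delayed(struct model_vq *vq)
{
	uint16_t pending = vq->avail_idx - vq->used_idx;

	vq->used_event = vq->used_idx + pending * 3 / 4;
}

int main(void)
{
	struct model_vq vq = { .used_idx = 0, .avail_idx = 64 };

	model_disable_cb(&vq);
	printf("after disable_cb: used_event=%u (unchanged, flag ignored)\n",
	       vq.used_event);

	model_enable_cb(&vq);
	printf("after enable_cb: interrupt once used_idx passes %u\n",
	       vq.used_event);

	model_enable_cb_delayed(&vq);
	printf("after enable_cb_delayed: interrupt deferred past %u\n",
	       vq.used_event);
	return 0;
}

In this model, the "delayed" variant moves the interrupt threshold well
ahead of the current used index, which is the suppression effect the patch
relies on; for packed rings, where the delayed and plain enable paths
behave the same, no such deferral happens.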

> Thanks for looking into this and proposing fixes for this issue and the
> known other spurious tx interrupt issue.
