Message-ID: <ab603c53-f7f8-5e89-a7c6-0050a97abe7b@redhat.com>
Date:   Wed, 12 Sep 2018 11:35:26 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc:     Network Development <netdev@...r.kernel.org>,
        David Miller <davem@...emloft.net>, caleb.raitto@...il.com,
        "Michael S. Tsirkin" <mst@...hat.com>,
        "Jon Olson (Google Drive)" <jonolson@...gle.com>,
        Willem de Bruijn <willemb@...gle.com>
Subject: Re: [PATCH net-next] virtio_net: ethtool tx napi configuration



On 2018-09-11 09:14, Willem de Bruijn wrote:
>>>> I cooked up a fixup, and it seems to work in my setup:
>>>>
>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>>> index b320b6b14749..9181c3f2f832 100644
>>>> --- a/drivers/net/virtio_net.c
>>>> +++ b/drivers/net/virtio_net.c
>>>> @@ -2204,10 +2204,17 @@ static int virtnet_set_coalesce(struct net_device *dev,
>>>>                    return -EINVAL;
>>>>
>>>>            if (napi_weight ^ vi->sq[0].napi.weight) {
>>>> -               if (dev->flags & IFF_UP)
>>>> -                       return -EBUSY;
>>>> -               for (i = 0; i < vi->max_queue_pairs; i++)
>>>> +               for (i = 0; i < vi->max_queue_pairs; i++) {
>>>> +                       struct netdev_queue *txq =
>>>> +                              netdev_get_tx_queue(vi->dev, i);
>>>> +
>>>> +                       virtnet_napi_tx_disable(&vi->sq[i].napi);
>>>> +                       __netif_tx_lock_bh(txq);
>>>>                            vi->sq[i].napi.weight = napi_weight;
>>>> +                       __netif_tx_unlock_bh(txq);
>>>> +                       virtnet_napi_tx_enable(vi, vi->sq[i].vq,
>>>> +                                              &vi->sq[i].napi);
>>>> +               }
>>>>            }
>>>>
>>>>            return 0;
>>> Thanks! It passes my simple stress test, too, which consists of two
>>> concurrent loops: one toggling the ethtool option, the other running
>>> TCP_RR.
>>>
>>>> The only remaining case is the speculative tx polling in RX NAPI. I think
>>>> we don't need to worry about it, since it was never required for correctness.
>>> As long as the txq lock is held, that will be a noop anyway. The other
>>> concurrent action is skb_xmit_done. It looks correct to me, but I need
>>> to think about it a bit. The tricky transition is coming out of napi
>>> without having >= 2 + MAX_SKB_FRAGS clean descriptors. If the queue is
>>> stopped, that may deadlock transmission in no-napi mode.
>> Yes, maybe we can enable the tx queue when the napi weight is zero in
>> virtnet_poll_tx().
> Yes, that precaution should resolve that edge case.
>
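
For reference, the speculative tx polling path in question is a trylock
path, which is why it becomes a noop while the txq lock is held. A rough
sketch, modeled on the upstream virtnet_poll_cleantx() (details may differ
from the exact tree this patch targets):

static void virtnet_poll_cleantx(struct receive_queue *rq)
{
        struct virtnet_info *vi = rq->vq->vdev->priv;
        unsigned int index = vq2rxq(rq->vq);
        struct send_queue *sq = &vi->sq[index];
        struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, index);

        /* Tx napi disabled: nothing to clean speculatively. */
        if (!sq->napi.weight)
                return;

        /* Trylock only: while virtnet_set_coalesce() holds the txq
         * lock across the weight update, this path does nothing. */
        if (__netif_tx_trylock(txq)) {
                free_old_xmit_skbs(sq);
                __netif_tx_unlock(txq);
        }

        if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
                netif_tx_wake_queue(txq);
}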
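
And a minimal sketch of the precaution just discussed: have
virtnet_poll_tx() wake the queue once enough descriptors are free, so a
queue stopped around the napi transition cannot stall transmission in
no-napi mode (the exact condition and placement here are an assumption,
not the final patch):

static int virtnet_poll_tx(struct napi_struct *napi, int budget)
{
        struct send_queue *sq = container_of(napi, struct send_queue, napi);
        struct virtnet_info *vi = sq->vq->vdev->priv;
        struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, vq2txq(sq->vq));

        __netif_tx_lock(txq, raw_smp_processor_id());
        free_old_xmit_skbs(sq);
        __netif_tx_unlock(txq);

        virtqueue_napi_complete(napi, sq->vq, 0);

        /* Proposed precaution: wake a stopped queue on the way out of
         * napi, so that a transition to no-napi mode cannot deadlock
         * transmission when no further polls are coming. */
        if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
                netif_tx_wake_queue(txq);

        return 0;
}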

I've run a stress test and it passes. The test consists of:

- a VM with 2 queues
- a bash script that toggles tx napi on and off
- two netperf UDP_STREAM sessions sending small packets

Thanks
