Message-ID: <bbf2f017-0c24-45c2-1231-1b0febe82ad4@redhat.com>
Date: Thu, 13 Sep 2018 17:05:18 +0800
From: Jason Wang <jasowang@...hat.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: Network Development <netdev@...r.kernel.org>,
David Miller <davem@...emloft.net>, caleb.raitto@...il.com,
"Michael S. Tsirkin" <mst@...hat.com>,
"Jon Olson (Google Drive)" <jonolson@...gle.com>,
Willem de Bruijn <willemb@...gle.com>
Subject: Re: [PATCH net-next] virtio_net: ethtool tx napi configuration
On Sep 12, 2018 at 21:43, Willem de Bruijn wrote:
> On Tue, Sep 11, 2018 at 11:35 PM Jason Wang <jasowang@...hat.com> wrote:
>>
>>
>> On Sep 11, 2018 at 09:14, Willem de Bruijn wrote:
>>>>>> I cooked up a fixup, and it looks like it works in my setup:
>>>>>>
>>>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>>>>> index b320b6b14749..9181c3f2f832 100644
>>>>>> --- a/drivers/net/virtio_net.c
>>>>>> +++ b/drivers/net/virtio_net.c
>>>>>> @@ -2204,10 +2204,17 @@ static int virtnet_set_coalesce(struct net_device *dev,
>>>>>>                  return -EINVAL;
>>>>>>
>>>>>>          if (napi_weight ^ vi->sq[0].napi.weight) {
>>>>>> -                if (dev->flags & IFF_UP)
>>>>>> -                        return -EBUSY;
>>>>>> -                for (i = 0; i < vi->max_queue_pairs; i++)
>>>>>> +                for (i = 0; i < vi->max_queue_pairs; i++) {
>>>>>> +                        struct netdev_queue *txq =
>>>>>> +                                netdev_get_tx_queue(vi->dev, i);
>>>>>> +
>>>>>> +                        virtnet_napi_tx_disable(&vi->sq[i].napi);
>>>>>> +                        __netif_tx_lock_bh(txq);
>>>>>>                          vi->sq[i].napi.weight = napi_weight;
>>>>>> +                        __netif_tx_unlock_bh(txq);
>>>>>> +                        virtnet_napi_tx_enable(vi, vi->sq[i].vq,
>>>>>> +                                               &vi->sq[i].napi);
>>>>>> +                }
>>>>>>          }
>>>>>>
>>>>>>          return 0;
>>>>> Thanks! It passes my simple stress test, too, which consists of two
>>>>> concurrent loops: one toggling the ethtool option, the other running
>>>>> TCP_RR.
>>>>>
>>>>>> The only remaining case is the speculative tx polling in RX NAPI. I think we
>>>>>> don't need to worry about it here, since it is not required for correctness.
>>>>> As long as the txq lock is held, that will be a noop anyway. The other
>>>>> concurrent action is skb_xmit_done. It looks correct to me, but I need
>>>>> to think about it a bit. The tricky transition is coming out of napi without
>>>>> having >= 2 + MAX_SKB_FRAGS clean descriptors. If the queue is
>>>>> stopped, it may deadlock transmission in no-napi mode.
>>>> Yes, maybe we can re-enable the tx queue when the napi weight is zero in
>>>> virtnet_poll_tx().
>>> Yes, that precaution should resolve that edge case.
>>>
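For reference, a rough sketch of that precaution (illustrative only:
virtnet_poll_tx_tail() is a made-up helper standing in for the tail of
virtnet_poll_tx(), and the actual fix may end up looking different):

static void virtnet_poll_tx_tail(struct send_queue *sq,
                                 struct netdev_queue *txq)
{
        /* Usual case: wake the queue once enough descriptors have been
         * reclaimed for another worst-case packet.
         */
        if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) {
                netif_tx_wake_queue(txq);
                return;
        }

        /* Edge case discussed above: if the napi weight was flipped to
         * zero while this poll ran, no further tx napi will be scheduled,
         * so don't leave the queue stopped; start_xmit has to be able to
         * reclaim descriptors itself in no-napi mode.
         */
        if (!sq->napi.weight)
                netif_tx_wake_queue(txq);
}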
>> I've run a stress test and it passes. The test consists of:
>>
>> - vm with 2 queues
>> - a bash script to enable and disable tx napi
>> - two netperf UDP_STREAM sessions to send small packets
> Great. That matches my results. Do you want to send the v2?
Some mails were blocked, so I did not receive some replies in time and
posted a v2 (which, as you've pointed out, is buggy).
Thanks