Message-ID: <CAF=yD-JQ5vL8fhLwtz9yY9B4NZR3Tf0kmZHAi-9ht_-Bz0n4Sw@mail.gmail.com>
Date:   Tue, 5 Sep 2017 16:09:19 +0200
From:   Willem de Bruijn <willemdebruijn.kernel@...il.com>
To:     Jason Wang <jasowang@...hat.com>
Cc:     "Michael S. Tsirkin" <mst@...hat.com>,
        Koichiro Den <den@...ipeden.com>,
        virtualization@...ts.linux-foundation.org,
        Network Development <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next] virtio-net: invoke zerocopy callback on xmit
 path if no tx napi

On Mon, Sep 4, 2017 at 5:03 AM, Jason Wang <jasowang@...hat.com> wrote:
>
>
> On 2017-09-02 00:17, Willem de Bruijn wrote:
>>>>>
>>>>> This is not a 50/50 split, which implies that some packets from the
>>>>> large
>>>>> packet flow are still converted to copying. Without the change the rate
>>>>> without queue was 80k zerocopy vs 80k copy, so this choice of
>>>>> (vq->num >> 2) appears too conservative.
>>>>>
>>>>> However, testing with (vq->num >> 1) was not as effective at mitigating
>>>>> stalls. I did not save that data, unfortunately. Can run more tests on
>>>>> fine
>>>>> tuning this variable, if the idea sounds good.
>>>>
>>>>
>>>> Looks like there are still two cases left:
>>>
>>> To be clear, this patch is not intended to fix all issues. It is a small
>>> improvement to avoid HoL blocking due to queued zerocopy skbs.
>
>
> Right, just want to see if there's anything left.
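
For reference, a minimal sketch of the (vq->num >> 2) heuristic discussed
above (simplified; upend_idx/done_idx bookkeeping as in vhost_net, the
helper name is hypothetical):

static bool tx_may_zerocopy(struct vhost_net_virtqueue *nvq,
			    struct vhost_virtqueue *vq)
{
	/* count of in-flight zerocopy requests, in a ring of
	 * UIO_MAXIOV entries */
	int pend = (nvq->upend_idx - nvq->done_idx + UIO_MAXIOV)
		   % UIO_MAXIOV;

	/* the shift (>> 2 vs. >> 1) is the knob being tuned */
	return pend < (vq->num >> 2);
}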
>
>>>
>>> The trade-off is that reverting to copying in these cases increases
>>> cycle cost. I think that is a trade-off worth making compared to
>>> the alternative drop in throughput. It probably would be good to be
>>> able to measure this without kernel instrumentation: export
>>> counters similar to net->tx_zcopy_err and net->tx_packets (though
>>> without reset to zero, as in vhost_net_tx_packet).
>
>
> I think it's acceptable to spend extra cycles if we detect HoL blocking anyhow.
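
A hedged sketch of the counter idea (tx_packets and tx_zcopy_err are
existing struct vhost_net members; the *_total fields and the helper
are hypothetical):

static void vhost_net_count_tx(struct vhost_net *net, bool zcopy_err)
{
	/* vhost_net_tx_packet() zeroes tx_packets and tx_zcopy_err
	 * once tx_packets hits 1024, for its error-rate heuristic.
	 * Measurement counters would never reset, so userspace can
	 * sample deltas over any interval.
	 */
	net->tx_packets_total++;		/* hypothetical new member */
	if (zcopy_err)
		net->tx_zcopy_err_total++;	/* hypothetical new member */
}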
>
>>>
>>>> 1) sndbuf is not INT_MAX
>>>
>>> You mean the case where the device stalls, later zerocopy notifications
>>> are queued, but these are never cleaned in free_old_xmit_skbs,
>>> because it requires a start_xmit and by now the (only) socket is out of
>>> descriptors?
>>
>> Typo, sorry. I meant out of sndbuf.
>
>
> I mean e.g. for tun. If its sndbuf is smaller than e.g. (vq->num >> 1) *
> $pkt_size and all packets are held by some module, a limit like
> vq->num >> 1 won't work, since we hit sndbuf before it.

Good point.
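
To make that concrete (illustrative numbers, not from the thread): with
vq->num = 256 and 1500-byte packets, a vq->num >> 1 cap only engages
once about 128 * 1500 = 192000 bytes are in flight, so any tun sndbuf
below ~192 KB is hit first and the cap never triggers.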

>>
>>> A watchdog would help somewhat. With tx-napi, this case cannot occur,
>>> either, as free_old_xmit_skbs no longer depends on a call to start_xmit.
>>>
>>>> 2) tx napi is used for virtio-net
>>>
>>> I am not aware of any issue specific to the use of tx-napi?
>
>
> It might not be clear here. I mean e.g. virtio_net (tx-napi) in the guest +
> vhost_net (zerocopy) in the host. In this case, even if we switch to datacopy
> when the ubuf count exceeds vq->num >> 1, we still complete tx buffers in
> order, so the tx interrupt could be delayed indefinitely.

Copied buffers are completed immediately in handle_tx.

Do you mean when a process sends fewer packets than vq->num >> 1,
so that all are queued? Yes, then the latency is indeed that of the last
element leaving the qdisc.
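
To illustrate the first point, a condensed sketch of the completion
split in handle_tx (simplified from drivers/vhost/net.c; error
handling omitted):

	err = sock->ops->sendmsg(sock, &msg, len);

	if (!zcopy_used)
		/* copied: recycle the descriptor and signal now */
		vhost_add_used_and_signal(&net->dev, vq, head, 0);
	else
		/* zerocopy: the skb destructor marks the ubuf done
		 * later; vhost_zerocopy_signal_used() reaps strictly
		 * in order, so one held skb delays those behind it.
		 */
		vhost_zerocopy_signal_used(net, vq);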

>>>
>>>> 1) could be a corner case, and for 2) what you suggest here may not
>>>> solve the issue since it still does in-order completion.
>>>
>>> Somewhat tangential, but it might also help to break the in-order
>>> completion processing in vhost_zerocopy_signal_used. Complete
>>> all descriptors between done_idx and upend_idx. done_idx should
>>> then only be advanced to the oldest not-yet-completed descriptor.
>>>
>>> In the test I ran, where the oldest descriptors are held in a queue and
>>> all newer ones are tail-dropped,
>
>
> Do you mean the descriptors were tail-dropped by vhost?

Tail-dropped by netem. The dropped items are completed out of
order by vhost before the held items.
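
A minimal sketch of the out-of-order completion suggested above
(assuming vhost's heads[] status values; the function name is
hypothetical and locking is omitted):

static void vhost_zerocopy_signal_used_ooo(struct vhost_net *net,
					   struct vhost_virtqueue *vq)
{
	struct vhost_net_virtqueue *nvq =
		container_of(vq, struct vhost_net_virtqueue, vq);
	int i;

	/* complete every finished buffer, even past a pending one */
	for (i = nvq->done_idx; i != nvq->upend_idx;
	     i = (i + 1) % UIO_MAXIOV) {
		if (VHOST_DMA_IS_DONE(vq->heads[i].len)) {
			vhost_add_used_and_signal(vq->dev, vq,
						  vq->heads[i].id, 0);
			vq->heads[i].len = VHOST_DMA_CLEAR_LEN;
		}
	}

	/* done_idx only advances to the oldest still-pending entry */
	while (nvq->done_idx != nvq->upend_idx &&
	       vq->heads[nvq->done_idx].len == VHOST_DMA_CLEAR_LEN)
		nvq->done_idx = (nvq->done_idx + 1) % UIO_MAXIOV;
}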
