Date:   Thu, 31 Aug 2017 10:30:15 -0400
From:   Willem de Bruijn <willemdebruijn.kernel@...il.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>
Cc:     Koichiro Den <den@...ipeden.com>, Jason Wang <jasowang@...hat.com>,
        virtualization@...ts.linux-foundation.org,
        Network Development <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next] virtio-net: invoke zerocopy callback on xmit
 path if no tx napi

On Tue, Aug 29, 2017 at 3:35 PM, Willem de Bruijn
<willemdebruijn.kernel@...il.com> wrote:
> On Fri, Aug 25, 2017 at 9:03 PM, Willem de Bruijn
> <willemdebruijn.kernel@...il.com> wrote:
>> On Fri, Aug 25, 2017 at 7:32 PM, Michael S. Tsirkin <mst@...hat.com> wrote:
>>> On Fri, Aug 25, 2017 at 06:44:36PM -0400, Willem de Bruijn wrote:
>>>> >> >> > We don't enable network watchdog on virtio but we could and maybe
>>>> >> >> > should.
>>>> >> >>
>>>> >> >> Can you elaborate?
>>>> >> >
>>>> >> > The issue is that holding onto buffers for very long times makes guests
>>>> >> > think they are stuck. This is fundamentally because from the guest's
>>>> >> > point of view this is a NIC, so it is supposed to transmit things out in
>>>> >> > a timely manner. If the host backs the virtual NIC with something that is
>>>> >> > not a NIC, with traffic shaping etc. introducing unbounded latencies, the
>>>> >> > guest will be confused.
>>>> >>
>>>> >> That assumes that guests are fragile in this regard. A Linux guest
>>>> >> does not make such assumptions.
>>>> >
>>>> > Yes it does. Examples above:
>>>> >         > > - a single slow flow can occupy the whole ring, you will not
>>>> >         > >   be able to make any new buffers available for the fast flow
>>>>
>>>> Oh, right. Though those are due to vring_desc pool exhaustion
>>>> rather than an upper bound on latency of any single packet.
>>>>
>>>> Limiting the number of zerocopy packets in flight to some fraction
>>>> of the ring ensures that fast flows can always grab a slot.
>>>> Running
>>>> out of ubuf_info slots reverts to copy, so indirectly does this. But if
>>>> I read it correctly, the zerocopy pool may be equal to or larger than
>>>> the descriptor pool. Should we refine the zcopy_used test
>>>>
>>>>     (nvq->upend_idx + 1) % UIO_MAXIOV != nvq->done_idx
>>>>
>>>> to also return false if the number of outstanding ubuf_info is greater
>>>> than, say, vq->num >> 1?
>>>
>>>
>>> We'll need to think about where to put the threshold, but I think it's
>>> a good idea.
>>>
>>> Maybe even a fixed number, e.g. max(vq->num >> 1, X) to limit host
>>> resources.
>>>
>>> In a sense it still means that once you run out of slots, zcopy gets disabled, possibly permanently.
>>>
>>> Need to experiment with some numbers.
>>
>> I can take a stab with two flows, one delayed in a deep host qdisc
>> queue. See how this change affects the other flow and also how
>> sensitive that is to the chosen threshold value.
>
> Incomplete results at this stage, but I do see this correlation between
> flows. It occurs even while not running out of zerocopy descriptors,
> which I cannot yet explain.
>
> Running two threads in a guest, each with a UDP socket, each
> sending up to 100 datagrams, or until EAGAIN, every msec.
>
> Sender A sends 1-byte datagrams.
> Sender B sends datagrams of VHOST_GOODCOPY_LEN bytes, which is enough
> to trigger zcopy_used in vhost net.
>
> A local receive process on the host receives both flows. To avoid
> a deep copy when looping the packet onto the receive path,
> I changed skb_orphan_frags_rx to always return false (a gross hack).
>
> The flow with the larger packets is redirected through netem on ifb0:
>
>   modprobe ifb
>   ip link set dev ifb0 up
>   tc qdisc add dev ifb0 root netem limit $LIMIT rate 1MBit
>
>   tc qdisc add dev tap0 ingress
>   tc filter add dev tap0 parent ffff: protocol ip \
>       u32 match ip dport 8000 0xffff \
>       action mirred egress redirect dev ifb0
>
> For 10 second run, packet count with various ifb0 queue lengths $LIMIT:
>
> no filter
>   rx.A: ~840,000
>   rx.B: ~840,000
>
> limit 1
>   rx.A: ~500,000
>   rx.B: ~3100
>   ifb0: 3273 sent, 371141 dropped
>
> limit 100
>   rx.A: ~9000
>   rx.B: ~4200
>   ifb0: 4630 sent, 1491 dropped
>
> limit 1000
>   rx.A: ~6800
>   rx.B: ~4200
>   ifb0: 4651 sent, 0 dropped
>
> Sender B is always correctly rate limited to 1 Mbit/s or less. With a
> short queue, it ends up dropping a lot and sending even less.
>
> When a queue builds up for sender B, sender A throughput is strongly
> correlated with queue length. With queue length 1, it can send almost
> at unthrottled speed. But even at limit 100 its throughput is on the
> same order as sender B.
>
> What is surprising to me is that this happens even though the number
> of ubuf_info in use at limit 100 is around 100 at all times. In other words,
> it does not exhaust the pool.
>
> When zcopy_used is forced to false for all packets, this correlation
> between sender A and sender B throughput does not appear.
>
> no filter
>   rx.A: ~850,000
>   rx.B: ~850,000
>
> limit 100
>   rx.A: ~850,000
>   rx.B: ~4200
>   ifb0: 4518 sent, 876182 dropped
>
> Also relevant is that with zerocopy, the sender processes back off
> and report the same count as the receiver. Without zerocopy,
> both senders send at full speed, even if only 4200 packets from flow
> B arrive at the receiver.
>
> This is with the default virtio_net driver, so without napi-tx.
>
> It appears that the zerocopy notifications are pausing the guest.
> Will look at that now.

It was indeed as simple as that. With 256 descriptors, queuing even
a hundred or so packets causes the guest to stall the device as soon
as the qdisc is installed.

Adding this check

+                       in_use = nvq->upend_idx - nvq->done_idx;
+                       if (nvq->upend_idx < nvq->done_idx)
+                               in_use += UIO_MAXIOV;
+
+                       if (in_use > (vq->num >> 2))
+                               zcopy_used = false;

has the desired behavior of reverting zerocopy requests to copying.
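
For reference, the same check as a standalone, commented sketch; the
helper name is mine and not part of the patch, and it only assumes the
existing vhost-net fields (upend_idx/done_idx are the zerocopy
submit/completion indices, wrapping at UIO_MAXIOV):

        /* Number of zerocopy buffers submitted but not yet completed,
         * accounting for index wrap-around at UIO_MAXIOV.
         */
        static int vhost_net_zcopy_in_use(const struct vhost_net_virtqueue *nvq)
        {
                int in_use = nvq->upend_idx - nvq->done_idx;

                if (nvq->upend_idx < nvq->done_idx)
                        in_use += UIO_MAXIOV;
                return in_use;
        }

        /* Fall back to copying once more than a quarter of the ring is
         * held by outstanding zerocopy buffers, so a single slow flow
         * cannot starve the descriptor ring for other flows.
         */
        if (vhost_net_zcopy_in_use(nvq) > (vq->num >> 2))
                zcopy_used = false;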

Without this change, the result is, as previously reported, throughput
dropping to hundreds of packets per second on both flows.

With the change, the pps observed over a few seconds at handle_tx is

zerocopy=165 copy=168435
zerocopy=0 copy=168500
zerocopy=65 copy=168535

Both flows continue to send at more or less normal rate, with only
sender B observing massive drops at the netem.

With the queue removed, the rate reverts to

zerocopy=58878 copy=110239
zerocopy=58833 copy=110207

This is not a 50/50 split, which implies that some packets from the
large-packet flow are still converted to copying. Without the change,
the rate without the queue was 80k zerocopy vs 80k copy, so this
choice of (vq->num >> 2) appears too conservative.

However, testing with (vq->num >> 1) was not as effective at mitigating
stalls. I did not save that data, unfortunately. I can run more tests to
fine-tune this threshold, if the idea sounds good.

I also compiled qemu with 1024 descriptors in the tx ring instead of 256.
In that case the test

                if (unlikely(vhost_exceeds_maxpend(net)))
                        break;

which is

        return (nvq->upend_idx + vq->num - VHOST_MAX_PEND) % UIO_MAXIOV
                == nvq->done_idx;

is hit before my proposed change, again stalling the device instead of
reverting to copying.
