Message-ID: <c26072c1-b03e-6691-5a51-e115a8aa8c09@gmail.com>
Date:   Fri, 16 Feb 2018 09:19:02 -0800
From:   John Fastabend <john.fastabend@...il.com>
To:     Jesper Dangaard Brouer <brouer@...hat.com>,
        Jason Wang <jasowang@...hat.com>
Cc:     "Michael S. Tsirkin" <mst@...hat.com>, netdev@...r.kernel.org,
        Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Daniel Borkmann <borkmann@...earbox.net>,
        "David S. Miller" <davem@...emloft.net>
Subject: Re: [RFC net PATCH] virtio_net: disable XDP_REDIRECT in
 receive_mergeable() case

On 02/16/2018 07:41 AM, Jesper Dangaard Brouer wrote:
> On Fri, 16 Feb 2018 13:31:37 +0800
> Jason Wang <jasowang@...hat.com> wrote:
> 
>> On 2018-02-16 06:43, Jesper Dangaard Brouer wrote:
>>> The virtio_net code has three different RX code paths in receive_buf().
>>> Two of these code paths can handle XDP, but one of them is broken for
>>> at least XDP_REDIRECT.
>>>
>>> Function(1): receive_big() does not support XDP.
>>> Function(2): receive_small() supports XDP fully and uses build_skb().
>>> Function(3): receive_mergeable() has broken XDP_REDIRECT and uses
>>> napi_alloc_skb().
>>>
>>> The simple explanation is that receive_mergeable() is broken because
>>> it uses napi_alloc_skb(), which violates XDP's assumption that the
>>> packet header+data sit in a single page with enough tail room for
>>> skb_shared_info.
>>>
>>> The longer explanation is that receive_mergeable() tries to work
>>> around and satisfy these XDP requirements, e.g. by having a function
>>> xdp_linearize_page() that allocates and memcpys RX buffers around (in
>>> case the packet is scattered across multiple RX buffers).  This
>>> currently satisfies XDP_PASS, XDP_DROP and XDP_TX (but only because
>>> we have not implemented bpf_xdp_adjust_tail yet).
>>>
>>> The XDP_REDIRECT action combined with cpumap is broken, and causes
>>> hard-to-debug crashes.  The main issue is that the RX packet does not
>>> have the needed tail-room (SKB_DATA_ALIGN(skb_shared_info)), causing
>>> skb_shared_info to overlap the next packet's head-room (in which
>>> cpumap stores info).
>>>
>>> Reproducing depends on the packet payload length and on whether the
>>> RX-buffer size happened to have tail-room for skb_shared_info or not.
>>> To make this even harder to troubleshoot, the RX-buffer size changes
>>> dynamically at runtime, based on an Exponentially Weighted Moving
>>> Average (EWMA) over the packet length, when refilling RX rings.
>>>
>>> This patch only disables XDP_REDIRECT support in the
>>> receive_mergeable() case, because it can cause a real crash.
>>>
>>> But IMHO we should NOT support XDP in receive_mergeable() at all,
>>> because the principles behind XDP are to gain speed by (1) code
>>> simplicity, (2) sacrificing memory, and (3) moving runtime checks to
>>> setup time where possible.  These principles are clearly being
>>> violated in receive_mergeable(), which e.g. tracks the average buffer
>>> size at runtime to save memory.  
>>
>> I agree to disable it for -net now. 
> 
> Okay... I'll send an official patch later.
> 
>> For net-next, we probably can do:
>>
>> - drop xdp_linearize_page() and do XDP through generic XDP helper
>>   after skb was built
> 
> I disagree strongly here - it makes no sense.
> 
> Why do you want to explicit fallback to Generic-XDP?
> (... then all the performance gain is gone!)
> And besides, a couple of function calls later, the generic XDP code
> will/can get invoked anyhow...
> 

Hi, can we get EWMA to ensure that for the majority of cases we have the
extra head room? It seems we could just over-estimate the size by N
bytes. In some cases we may under-estimate and would then need to fall
back to generic XDP, or otherwise grow the buffer, which of course would
be painful and slow, but presumably would happen rarely.

I think it would be much better to keep this feature than to kill it and
make it even more painful to get XDP working on virtio.

Disabling EWMA also seems reasonable to me.

> 
> Take a step back:
>  What is the reason/use-case for implementing XDP inside virtio_net?
> 
> From a DDoS/performance perspective XDP in virtio_net happens on the
> "wrong side", as it is activated _inside_ the guest OS, which is too
> late for a DDoS filter: the guest kick/switch overhead has already
> occurred.
> 

The hypervisor may not "know" how to detect DDoS if it's specific to
the VM's domain. In these cases we aren't measuring pps but are looking
at cycles/packet, and lowering cycles/packet frees up CPU for useful
work. Here I expect we can see a real CPU % drop in the VM by using
XDP.

The other use case is that once we have a fast path from NIC to VM in
the kernel, we can expect, from your numbers below, 3+ Mpps. I seem to
remember from Jason's netdevconf talk that he had some experimental
numbers that were even better. The other case is when the hypervisor is
not Linux and is feeding packets even faster; DPDK numbers, again from
Jason's slides, seemed to show 11+ Mpps. XDP makes a lot of sense here
IMO.

(those specific pps numbers I pulled from memory but the point is
feeding many Mpps into a VM should be doable)

The last thing is we may see hardware VFs emulating virtio at some
point; then XDP would be needed. With the newer virtio spec under
development my impression is that the hardware emulation piece is
becoming more of a focus. But someone actually working on that could
probably provide a more informed comment.

> I do use XDP_DROP inside the guest (driver virtio_net), but just to
> perform what I call zoom-in benchmarking, using perf-record to isolate
> the early RX code path in the guest.  (Using an iptables "raw" table
> drop is almost as useful for that purpose.)
> 

I suspect customers will eventually start using this in VMs for
real use cases once the infrastructure and capabilities mature and
kernel versions in deployed VMs catch up.

> 
> 	
> The XDP ndo_xdp_xmit in tuntap/tun.c (that you also implemented) is
> significantly more interesting, as it allows us to skip large parts of
> the network stack and redirect from a physical device (ixgbe) into a
> guest device.  Ran a benchmark:
>  - 0.5 Mpps with normal code path into device with driver tun
>  - 3.7 Mpps with XDP_REDIRECT from ixgbe into same device

Yep also very interesting but a different use case. This is accelerating
the hypervisor vswitch. The above is accelerating the VM domain.

> 
> Plus, there are indications that 3.7 Mpps is not the real limit, as
> the guest CPU doing XDP_DROP is 75% idle... thus this is likely a
> scheduling + queue-size issue.
> 

The target I had in mind is about 6 Mpps for L2fwd tests.

Thanks,
John

> 
>> - disable EWMA when XDP is set and reserve enough tailroom.
>>
>>>
>>> Besides the described bug:
>>>
>>> Update(1): There is also an OOM leak in the XDP_REDIRECT code, which
>>> receive_small() is likely also affected by.
>>>
>>> Update(2): Also observed a guest crash when redirecting out another
>>> virtio_net device while that device is down.  
>>
>> Will have a look at these issues. (Holiday in China now, so will do
>> it after.)
> 
> 
> 
