Date:   Wed, 2 Nov 2016 15:27:08 +0100
From:   Jesper Dangaard Brouer <>
To:     Thomas Graf <>
Cc:     Shrijeet Mukherjee <>,
        Alexei Starovoitov <>,
        Jakub Kicinski <>,
        John Fastabend <>,
        David Miller <>, Roopa Prabhu <>,
        Nikolay Aleksandrov <>,
Subject: Re: [PATCH net-next RFC WIP] Patch for XDP support for virtio_net

On Sat, 29 Oct 2016 13:25:14 +0200
Thomas Graf <> wrote:

> On 10/28/16 at 08:51pm, Shrijeet Mukherjee wrote:
> > Generally agree, but SRIOV nics with multiple queues can end up in a bad
> > spot if each buffer was 4K right ? I see a specific page pool to be used
> > by queues which are enabled for XDP as the easiest to swing solution that
> > way the memory overhead can be restricted to enabled queues and shared
> > access issues can be restricted to skb's using that pool no ?

Yes, that is why I've been arguing so strongly for having the
flexibility to attach an XDP program per RX queue, as this only changes
the memory model for that one queue.

> Isn't this clearly a must anyway? I may be missing something
> fundamental here so please enlighten me :-)
> If we dedicate a page per packet, that could translate to 14M*4K worth
> of memory being mapped per second for just a 10G NIC under DoS attack.
> How can one protect such as system? Is the assumption that we can always
> drop such packets quickly enough before we start dropping randomly due
> to memory pressure? If a handshake is required to determine validity
> of a packet then that is going to be difficult.

Under DoS attacks you don't run out of memory, because a diverse set of
socket memory limits and accounting avoids that situation.  What does
happen is that the maximum achievable PPS rate becomes directly
dependent on the time you spend on each packet.  This use of CPU
resources (and hitting memory-limit safeguards) pushes back on the
driver's speed to process the RX ring.  In effect, packets are dropped
in the NIC HW as the RX-ring queue is not emptied fast enough.

Given you don't control what HW drops, the attacker will "successfully"
cause your good traffic to be among the dropped packets.

This is where XDP changes the picture.  If you can express (in eBPF) a
filter that separates "bad" from "good" traffic, then you can take
back control.  It is almost like controlling what traffic the HW
should drop.  As long as the cost of the XDP/eBPF filter plus serving
the regular traffic does not use all of your CPU resources, you have
overcome the attack.

Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of