Date:   Wed, 2 Nov 2016 18:28:34 -0700
From:   Shrijeet Mukherjee <shm@...ulusnetworks.com>
To:     Jesper Dangaard Brouer <brouer@...hat.com>,
        Thomas Graf <tgraf@...g.ch>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Jakub Kicinski <kubakici@...pl>,
        John Fastabend <john.fastabend@...il.com>,
        David Miller <davem@...emloft.net>, alexander.duyck@...il.com,
        mst@...hat.com, shrijeet@...il.com, tom@...bertland.com,
        netdev@...r.kernel.org, Roopa Prabhu <roopa@...ulusnetworks.com>,
        Nikolay Aleksandrov <nikolay@...ulusnetworks.com>
Subject: RE: [PATCH net-next RFC WIP] Patch for XDP support for virtio_net

> -----Original Message-----
> From: Jesper Dangaard Brouer [mailto:brouer@...hat.com]
> Sent: Wednesday, November 2, 2016 7:27 AM
> To: Thomas Graf <tgraf@...g.ch>
> Cc: Shrijeet Mukherjee <shm@...ulusnetworks.com>; Alexei Starovoitov
> <alexei.starovoitov@...il.com>; Jakub Kicinski <kubakici@...pl>; John
> Fastabend <john.fastabend@...il.com>; David Miller
> <davem@...emloft.net>; alexander.duyck@...il.com; mst@...hat.com;
> shrijeet@...il.com; tom@...bertland.com; netdev@...r.kernel.org;
> Roopa Prabhu <roopa@...ulusnetworks.com>; Nikolay Aleksandrov
> <nikolay@...ulusnetworks.com>; brouer@...hat.com
> Subject: Re: [PATCH net-next RFC WIP] Patch for XDP support for
> virtio_net
>
> On Sat, 29 Oct 2016 13:25:14 +0200
> Thomas Graf <tgraf@...g.ch> wrote:
>
> > On 10/28/16 at 08:51pm, Shrijeet Mukherjee wrote:
> > > Generally agree, but SRIOV NICs with multiple queues can end up in a
> > > bad spot if each buffer were 4K, right? I see a specific page pool,
> > > used only by queues which are enabled for XDP, as the easiest-to-swing
> > > solution: that way the memory overhead can be restricted to enabled
> > > queues, and shared-access issues can be restricted to skb's using that
> > > pool, no?
>
> Yes, that is why I've been arguing so strongly for having the flexibility
> to attach an XDP program per RX queue, as this only changes the memory
> model for this one queue.
>
>
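
Just to make the per-queue idea concrete, the attach hook Jesper is
describing would look something like the below. Purely a hypothetical
sketch (the mydrv_* names and the per-ring xdp_prog field are made up,
and no such ndo exists today), but it shows that only the ring with a
program attached has to switch to the page-per-packet memory model:

#include <linux/netdevice.h>
#include <linux/rtnetlink.h>
#include <linux/bpf.h>

/* Hypothetical driver hook: attach a BPF program to one RX queue only. */
static int mydrv_xdp_set_queue(struct net_device *dev, u32 queue_id,
			       struct bpf_prog *prog)
{
	struct mydrv_priv *priv = netdev_priv(dev);	/* made-up priv */
	struct mydrv_rx_ring *ring;
	struct bpf_prog *old;

	if (queue_id >= priv->num_rx_rings)
		return -EINVAL;
	ring = &priv->rx_ring[queue_id];

	/* only this ring sees the new memory model; NULL prog detaches */
	old = rtnl_dereference(ring->xdp_prog);
	rcu_assign_pointer(ring->xdp_prog, prog);
	if (old)
		bpf_prog_put(old);
	return 0;
}
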
> > Isn't this clearly a must anyway? I may be missing something
> > fundamental here so please enlighten me :-)
> >
> > If we dedicate a page per packet, that could translate to 14M*4K worth
> > of memory being mapped per second for just a 10G NIC under DoS attack.
> > How can one protect such a system? Is the assumption that we can
> > always drop such packets quickly enough before we start dropping
> > randomly due to memory pressure? If a handshake is required to
> > determine validity of a packet then that is going to be difficult.
>
> Under DoS attacks you don't run out of memory, because a diverse set of
> socket memory limits/accounting avoids that situation.  What does happen
> is that the maximum achievable PPS rate becomes directly dependent on the
> time you spend on each packet.  This use of CPU resources (and hitting
> memory-limit safeguards) pushes back on the driver's speed to process
> the RX ring.  In effect, packets are dropped in the NIC HW because the
> RX-ring queue is not emptied fast enough.
>
> Given that you don't control what the HW drops, the attacker will "successfully"
> cause your good traffic to be among the dropped packets.
>
> This is where XDP changes the picture. If you can express (via eBPF) a
> filter that can separate "bad" vs "good" traffic, then you can take back
> control, almost like controlling what traffic the HW should drop.
> As long as the cost of the XDP/eBPF filter plus serving regular traffic
> does not use all of your CPU resources, you have overcome the attack.
>
> --
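To put numbers on Thomas' scenario above: 10G line rate with minimum-size
frames is ~14.88 Mpps, so a page per packet means on the order of
14.88M * 4KiB ~= 60GB/s of pages cycling through the allocator every
second, which is why dropping early is the only real answer. And the
filter Jesper describes can be tiny. A toy XDP program that sheds a UDP
flood aimed at one port might look like this (the port number, the SEC()
plumbing and the build setup are illustrative, this is only a sketch):

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <linux/in.h>

#define SEC(name) __attribute__((section(name), used))

SEC("xdp")
int drop_udp_flood(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;
	struct iphdr *iph;
	struct udphdr *udph;

	/* every access is bounds-checked against data_end for the verifier */
	if ((void *)(eth + 1) > data_end)
		return XDP_PASS;
	if (eth->h_proto != __constant_htons(ETH_P_IP))
		return XDP_PASS;

	iph = (void *)(eth + 1);
	if ((void *)(iph + 1) > data_end || iph->protocol != IPPROTO_UDP)
		return XDP_PASS;

	udph = (void *)(iph + 1);	/* ignores IP options for brevity */
	if ((void *)(udph + 1) > data_end)
		return XDP_PASS;

	/* 9999 stands in for whatever port the attack is hitting */
	if (udph->dest == __constant_htons(9999))
		return XDP_DROP;	/* dropped before any skb is allocated */

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Matching packets never allocate an skb, so the per-packet cost under
attack is just the parse above.
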
Jesper, John et al.: to make this a little concrete, I am going to spin
up a v2 which enables XDP acceleration only in big-buffers mode; all
other modes will reject the XDP ndo.

Do we have agreement on that model?

It will mean that all vhost implementations will need to start with
mergeable buffers disabled to get XDP goodness, but that sounds like a
safe thing to do for now.
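
For the ndo itself, the gate would be roughly the below. Sketch only:
mergeable_rx_bufs and big_packets are the existing virtnet_info flags,
but the xdp_prog field is a stand-in and the error code choice is open:

static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog)
{
	struct virtnet_info *vi = netdev_priv(dev);
	struct bpf_prog *old_prog;

	/* only big-buffers mode gives XDP a page per packet; reject
	 * mergeable and small-buffer modes until they grow a suitable
	 * memory model */
	if (vi->mergeable_rx_bufs || !vi->big_packets)
		return -EOPNOTSUPP;

	old_prog = rtnl_dereference(vi->xdp_prog);	/* stand-in field */
	rcu_assign_pointer(vi->xdp_prog, prog);
	if (old_prog)
		bpf_prog_put(old_prog);
	return 0;
}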
