Message-ID: <20161103041145.arylaxxssbdylfwn@redhat.com>
Date: Thu, 3 Nov 2016 06:11:45 +0200
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Shrijeet Mukherjee <shm@...ulusnetworks.com>
Cc: Jesper Dangaard Brouer <brouer@...hat.com>,
Thomas Graf <tgraf@...g.ch>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Jakub Kicinski <kubakici@...pl>,
John Fastabend <john.fastabend@...il.com>,
David Miller <davem@...emloft.net>, alexander.duyck@...il.com,
shrijeet@...il.com, tom@...bertland.com, netdev@...r.kernel.org,
Roopa Prabhu <roopa@...ulusnetworks.com>,
Nikolay Aleksandrov <nikolay@...ulusnetworks.com>
Subject: Re: [PATCH net-next RFC WIP] Patch for XDP support for virtio_net
On Wed, Nov 02, 2016 at 06:28:34PM -0700, Shrijeet Mukherjee wrote:
> > -----Original Message-----
> > From: Jesper Dangaard Brouer [mailto:brouer@...hat.com]
> > Sent: Wednesday, November 2, 2016 7:27 AM
> > To: Thomas Graf <tgraf@...g.ch>
> > Cc: Shrijeet Mukherjee <shm@...ulusnetworks.com>; Alexei Starovoitov
> > <alexei.starovoitov@...il.com>; Jakub Kicinski <kubakici@...pl>; John
> > Fastabend <john.fastabend@...il.com>; David Miller
> > <davem@...emloft.net>; alexander.duyck@...il.com; mst@...hat.com;
> > shrijeet@...il.com; tom@...bertland.com; netdev@...r.kernel.org;
> > Roopa Prabhu <roopa@...ulusnetworks.com>; Nikolay Aleksandrov
> > <nikolay@...ulusnetworks.com>; brouer@...hat.com
> > Subject: Re: [PATCH net-next RFC WIP] Patch for XDP support for
> > virtio_net
> >
> > On Sat, 29 Oct 2016 13:25:14 +0200
> > Thomas Graf <tgraf@...g.ch> wrote:
> >
> > > On 10/28/16 at 08:51pm, Shrijeet Mukherjee wrote:
> > > > Generally agree, but SRIOV NICs with multiple queues can end up in a
> > > > bad spot if each buffer is 4K, right? I see a dedicated page pool,
> > > > used only by queues which are enabled for XDP, as the easiest
> > > > solution to swing: that way the memory overhead can be restricted to
> > > > the enabled queues, and the shared-access issues can be restricted
> > > > to skbs using that pool, no?
> >
> > Yes, that is why I've been arguing so strongly for having the
> > flexibility to attach an XDP program per RX queue, as this only changes
> > the memory model for that one queue.
> >
> >
> > > Isn't this clearly a must anyway? I may be missing something
> > > fundamental here so please enlighten me :-)
> > >
> > > If we dedicate a page per packet, that could translate to 14M*4K worth
> > > of memory being mapped per second for just a 10G NIC under DoS attack.
> > > How can one protect such a system? Is the assumption that we can
> > > always drop such packets quickly enough before we start dropping
> > > randomly due to memory pressure? If a handshake is required to
> > > determine validity of a packet then that is going to be difficult.
> >
> > Under DoS attacks you don't run out of memory, because a diverse set of
> > socket memory limits/accounting avoids that situation. What does happen
> > is that the maximum achievable PPS rate becomes directly dependent on
> > the time you spend on each packet. This use of CPU resources (and
> > hitting mem-limit safeguards) pushes back on the driver's speed to
> > process the RX ring. In effect, packets are dropped in the NIC HW as
> > the RX-ring queue is not emptied fast enough.
> >
> > Given you don't control what the HW drops, the attacker will
> > "successfully" cause your good traffic to be among the dropped packets.
> >
> > This is where XDP changes the picture. If you can express (by eBPF) a
> > filter that can separate "bad" vs "good" traffic, then you can take
> > back control. Almost like controlling what traffic the HW should drop.
> > As long as the cost of the XDP eBPF filter plus serving regular traffic
> > does not use all of your CPU resources, you have overcome the attack.
> >
> > --
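
[Aside: as a concrete illustration of the kind of filter Jesper describes
above (not part of this patch), a minimal XDP program could look roughly
like the sketch below. The "bad traffic" signature - all UDP to one
made-up port - the program name, and the section name are purely for
illustration; only the uapi types and return codes are real.]

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <asm/byteorder.h>

#define ATTACKED_PORT 9999	/* hypothetical port being flooded */

/* Section name is whatever the loader expects; "xdp" used here for show. */
__attribute__((section("xdp"), used))
int xdp_drop_bad(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;
	struct iphdr *iph;
	struct udphdr *udph;

	/* Every access must be bounds-checked against data_end for the
	 * verifier; anything we cannot parse is simply passed up the stack.
	 */
	if ((void *)(eth + 1) > data_end)
		return XDP_PASS;
	if (eth->h_proto != __constant_htons(ETH_P_IP))
		return XDP_PASS;

	iph = (void *)(eth + 1);
	if ((void *)(iph + 1) > data_end || iph->protocol != IPPROTO_UDP)
		return XDP_PASS;

	/* Assume no IP options for brevity. */
	udph = (void *)(iph + 1);
	if ((void *)(udph + 1) > data_end)
		return XDP_PASS;

	/* Drop the flood before any skb or page-pool work is done. */
	if (udph->dest == __constant_htons(ATTACKED_PORT))
		return XDP_DROP;

	return XDP_PASS;
}
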
> Jesper, John et al .. to make this a little concrete, I am going to spin
> up a v2 which has only big-buffers mode enabled for XDP acceleration; all
> other modes will reject the XDP ndo ..
>
> Do we have agreement on that model?
>
> It will mean that all vhost implementations will need to start with
> mergeable buffers disabled to get XDP goodness, but that sounds like a
> safe thing to do for now ..
It's OK for experimentation, but after speaking with Alexei it's clear
to me that XDP should have a separate code path in the driver, i.e. the
existing separation between receive-buffer modes is something that does
not make sense for XDP.
The way I imagine it working:
- when XDP is attached, disable all LRO using VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET
  (not used by the driver so far; it was designed to allow dynamic LRO
  control with ethtool)
- start adding page-sized buffers
- do something with the non-page-sized buffers added previously - what
  exactly? copy them, I guess? And what about LRO packets that are too
  large: can we drop them, or can we split them up?
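Roughly, for the first step, something along these lines (a sketch only:
virtnet_send_command() and the VIRTIO_NET_CTRL_GUEST_OFFLOADS_* /
VIRTIO_NET_F_* names are the existing driver/uapi ones, but the helper
itself is made up, and in real code the command buffer would have to live
off-stack, e.g. in struct virtnet_info, since it is handed to the device):

static int virtnet_xdp_clear_lro(struct virtnet_info *vi)
{
	struct scatterlist sg;
	__virtio64 data;
	u64 offloads = 0;

	if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS))
		return -EOPNOTSUPP;

	/* Keep checksum offload if it was negotiated, but clear
	 * TSO4/TSO6/ECN/UFO so the host never hands us a packet larger
	 * than a single page-sized buffer.
	 */
	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM))
		offloads |= 1ULL << VIRTIO_NET_F_GUEST_CSUM;

	data = cpu_to_virtio64(vi->vdev, offloads);
	sg_init_one(&sg, &data, sizeof(data));

	if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_GUEST_OFFLOADS,
				  VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET, &sg))
		return -EINVAL;

	return 0;
}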
I'm fine with disabling XDP for some configurations as the first step,
and we can add that support later.
Ideas about mergeable buffers (optional):
At the moment mergeable buffers can't be disabled dynamically.
They do bring a small benefit for XDP if the host MTU is large (see below)
and aren't hard to support:
- if the header is by itself, skip the 1st page
- otherwise, copy all data into the first page
and it's nicer not to add random limitations that require a guest reboot.
It might make sense to add a command that disables/enables
mergeable buffers dynamically, but that's for newer hosts.
The spec does not require it, but in practice most hosts put all data
in the 1st page or all in the 2nd page, so the copy will be a nop
in these cases.
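Very roughly, the copy part could look like the sketch below (illustration
only: the helper name, the frag-array view of a mergeable-buffer packet and
the assumption that each fragment starts at its page boundary are all made
up for brevity; the real receive path hands us buffers one by one from the
virtqueue):

static struct page *xdp_linearize(struct page **pages, unsigned int *lens,
				  int nr_frags, unsigned int *total_len)
{
	struct page *dst;
	unsigned int off = 0;
	int i;

	/* Common case on existing hosts: all data already in one page,
	 * so the "copy" is a nop.
	 */
	if (nr_frags == 1) {
		*total_len = lens[0];
		return pages[0];
	}

	dst = alloc_page(GFP_ATOMIC);
	if (!dst)
		return NULL;

	for (i = 0; i < nr_frags; i++) {
		/* If the fragments don't fit in one page (e.g. an LRO frame
		 * from an older host) we can't run XDP on a linear copy;
		 * the caller has to drop (or split) the packet.
		 */
		if (off + lens[i] > PAGE_SIZE) {
			__free_page(dst);
			return NULL;
		}
		memcpy(page_address(dst) + off, page_address(pages[i]),
		       lens[i]);
		off += lens[i];
	}

	*total_len = off;
	return dst;
}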
Large host MTU: newer hosts report the host MTU, older ones don't.
Using mergeable buffers we can at least detect this case
(and then what? drop, I guess).
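E.g. something like this at XDP attach time (again just a sketch: the
helper is hypothetical, and it assumes the host-MTU feature bit and config
field that newer hosts expose as VIRTIO_NET_F_MTU / virtio_net_config.mtu):

static bool virtnet_xdp_mtu_ok(struct virtnet_info *vi)
{
	unsigned int max = PAGE_SIZE - ETH_HLEN - VLAN_HLEN -
			   sizeof(struct virtio_net_hdr_mrg_rxbuf);

	/* Older host: MTU unknown, oversized packets can only be detected
	 * (and dropped) at receive time.
	 */
	if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_MTU))
		return true;

	return virtio_cread16(vi->vdev,
			      offsetof(struct virtio_net_config, mtu)) <= max;
}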
--
MST