Message-ID: <CALx6S35LmUtwRA85Kg6s7OR=e5Pj9ssqmWLsjtmXfZTXeG02zQ@mail.gmail.com>
Date: Tue, 20 Sep 2016 16:59:53 -0700
From: Tom Herbert <tom@...bertland.com>
To: Thomas Graf <tgraf@...g.ch>
Cc: "David S. Miller" <davem@...emloft.net>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
Kernel Team <kernel-team@...com>,
Tariq Toukan <tariqt@...lanox.com>,
Brenden Blanco <bblanco@...mgrid.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Eric Dumazet <eric.dumazet@...il.com>,
Jesper Dangaard Brouer <brouer@...hat.com>
Subject: Re: [PATCH RFC 1/3] xdp: Infrastructure to generalize XDP
On Tue, Sep 20, 2016 at 4:43 PM, Thomas Graf <tgraf@...g.ch> wrote:
> On 09/20/16 at 04:18pm, Tom Herbert wrote:
>> This allows use cases other than BPF inserting code into the data
>> path. This gives XDP potentially more utility and more users so that
>> we can motivate more driver implementations. For instance, I think
>> it's totally reasonable if the nftables guys want to insert some of
>> their rules to perform early DDoS drop to get the same performance
>> that we see in XDP.
>
> Reasonable point with nftables but are any of these users on the table
> already and ready to consume non-skbs? It would be a pity to add this
> complexity and cost if it is never used.
>
Well, we need to measure to ascertain the cost. As for complexity,
this actually reduces the complexity needed for XDP in the drivers,
which is a good thing because that's where most of the support and
development pain will be.
I am looking at using this for the ILA router. The problem I am
hitting is that not all packets that we need to translate go through
the XDP path: some would go through the kernel path and some through
the XDP path, which would mean maintaining parallel lookup tables for
the two paths, and that won't scale. ILA translation is so trivial
that it's not really something we need to be user programmable; the
fast path is really for accelerating an existing kernel capability.
If I can reuse the kernel code already written and the existing
kernel data structures to make a fast path in XDP, there is a lot of
value in that for me.
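
To make that concrete, here is a rough user-space model of what I
mean by sharing one lookup table between the skb path and an
XDP-style fast path. This is only a sketch; the names, table layout,
and offsets are made up for illustration and are not the actual ILA
kernel code or the API in this RFC, and byte order is ignored for
brevity.

    /* Illustrative only: one ILA identifier->locator table consulted by
     * both the normal (skb) path and a raw-packet XDP-style hook.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct ila_entry {
            uint64_t identifier;    /* low 64 bits of the ILA address */
            uint64_t locator;       /* locator to write into the header */
    };

    /* Single table, maintained by the existing (slow path) code. */
    static struct ila_entry ila_table[256];
    static unsigned int ila_count;

    static struct ila_entry *ila_lookup(uint64_t identifier)
    {
            unsigned int i;

            for (i = 0; i < ila_count; i++)
                    if (ila_table[i].identifier == identifier)
                            return &ila_table[i];
            return NULL;
    }

    /* Shared translation helper: rewrite the locator (upper 64 bits of
     * the IPv6 destination) in place. Both paths call this.
     */
    static int ila_translate(uint8_t *daddr /* 16 bytes */)
    {
            struct ila_entry *e;
            uint64_t id;

            memcpy(&id, daddr + 8, sizeof(id));
            e = ila_lookup(id);
            if (!e)
                    return 0;
            memcpy(daddr, &e->locator, sizeof(e->locator));
            return 1;
    }

    /* XDP-style fast path: raw packet data, assumed here to start at
     * the IPv6 header (daddr at offset 24).
     */
    static int ila_xdp_hook(uint8_t *data, unsigned int len)
    {
            if (len < 40)
                    return 0;
            return ila_translate(data + 24);
    }

    int main(void)
    {
            uint8_t pkt[40] = { 0 };        /* bare IPv6 header */

            ila_table[ila_count++] =
                    (struct ila_entry){ .identifier = 42, .locator = 7 };
            memcpy(pkt + 24 + 8, &(uint64_t){ 42 }, 8);

            printf("translated: %d\n", ila_xdp_hook(pkt, sizeof(pkt)));
            return 0;
    }

The point is that the hook only calls the same translation helper the
kernel path already uses, so there is one table to maintain.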
> I don't see how we can ensure performance if we have multiple
> subsystems register for the hook each adding their own parsers which
> need to be passed through sequentially. Maybe I'm missing something.
We can optimize for the single-hook case, or maybe limit things to
only allowing one hook to be set. In any case this obviously requires
a lot of performance evaluation; I am hoping to get feedback on the
design first. My question about using a linear list for this was a
real one: do you know of a better method off hand to implement a call
list?
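
For reference, the "call list" I have in mind is nothing fancier than
the sketch below: walk the registered hooks in order and stop at the
first non-pass verdict. Again this is just an illustrative user-space
model with made-up names and verdicts, not the structures in the
patch.

    /* Minimal model of a linear call list of generic XDP hooks. */
    #include <stdio.h>

    enum xdp_verdict { XDP_HOOK_PASS, XDP_HOOK_DROP, XDP_HOOK_TX };

    struct xdp_hook_ops {
            enum xdp_verdict (*run)(void *priv, void *data, unsigned int len);
            void *priv;
            struct xdp_hook_ops *next;
    };

    static struct xdp_hook_ops *xdp_hook_list;

    /* Registration links the new hook at the head of the list. */
    static void xdp_register_hook(struct xdp_hook_ops *ops)
    {
            ops->next = xdp_hook_list;
            xdp_hook_list = ops;
    }

    /* Per-packet traversal: run hooks in order, stop at the first one
     * that does not return PASS.
     */
    static enum xdp_verdict xdp_run_hooks(void *data, unsigned int len)
    {
            struct xdp_hook_ops *ops;
            enum xdp_verdict v;

            for (ops = xdp_hook_list; ops; ops = ops->next) {
                    v = ops->run(ops->priv, data, len);
                    if (v != XDP_HOOK_PASS)
                            return v;
            }
            return XDP_HOOK_PASS;
    }

    static enum xdp_verdict drop_all(void *priv, void *data, unsigned int len)
    {
            (void)priv; (void)data; (void)len;
            return XDP_HOOK_DROP;
    }

    int main(void)
    {
            static struct xdp_hook_ops drop_hook = { .run = drop_all };

            xdp_register_hook(&drop_hook);
            printf("verdict: %d\n", xdp_run_hooks(NULL, 0));
            return 0;
    }

With only one hook registered the per-packet cost is a single
indirect call, which is why I think limiting to one hook, or
special-casing that case, could keep the overhead down.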
Thanks,
Tom