Message-ID: <6e08f714-6284-6d0d-9cbe-711c64bf97aa@gmail.com>
Date:   Mon, 18 Nov 2019 15:41:00 +0900
From:   Toshiaki Makita <toshiaki.makita1@...il.com>
To:     Toke Høiland-Jørgensen <toke@...hat.com>,
        John Fastabend <john.fastabend@...il.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <jakub.kicinski@...ronome.com>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        Jamal Hadi Salim <jhs@...atatu.com>,
        Cong Wang <xiyou.wangcong@...il.com>,
        Jiri Pirko <jiri@...nulli.us>,
        Pablo Neira Ayuso <pablo@...filter.org>,
        Jozsef Kadlecsik <kadlec@...filter.org>,
        Florian Westphal <fw@...len.de>,
        Pravin B Shelar <pshelar@....org>
Cc:     netdev@...r.kernel.org, bpf@...r.kernel.org,
        William Tu <u9012063@...il.com>,
        Stanislav Fomichev <sdf@...ichev.me>
Subject: Re: [RFC PATCH v2 bpf-next 00/15] xdp_flow: Flow offload to XDP

On 2019/11/14 21:41, Toke Høiland-Jørgensen wrote:
> Toshiaki Makita <toshiaki.makita1@...il.com> writes:
> 
>> On 2019/11/13 1:53, Toke Høiland-Jørgensen wrote:
>>> Toshiaki Makita <toshiaki.makita1@...il.com> writes:
>>>>
>>>> Hi Toke,
>>>>
>>>> Sorry for the delay.
>>>>
>>>> On 2019/10/31 21:12, Toke Høiland-Jørgensen wrote:
>>>>> Toshiaki Makita <toshiaki.makita1@...il.com> writes:
>>>>>
>>>>>> On 2019/10/28 0:21, Toke Høiland-Jørgensen wrote:
>>>>>>> Toshiaki Makita <toshiaki.makita1@...il.com> writes:
>>>>>>>>> Yeah, you are right that it's something we're thinking about. I'm not
>>>>>>>>> sure we'll actually have the bandwidth to implement a complete solution
>>>>>>>>> ourselves, but we are very much interested in helping others do this,
>>>>>>>>> including smoothing out any rough edges (or adding missing features) in
>>>>>>>>> the core XDP feature set that is needed to achieve this :)
>>>>>>>>
>>>>>>>> I'm very interested in general usability solutions.
>>>>>>>> I'd appreciate if you could join the discussion.
>>>>>>>>
>>>>>>>> Here the basic idea of my approach is to reuse HW-offload infrastructure
>>>>>>>> in kernel.
>>>>>>>> Typical networking features in kernel have offload mechanism (TC flower,
>>>>>>>> nftables, bridge, routing, and so on).
>>>>>>>> In general these are what users want to accelerate, so easy XDP use also
>>>>>>>> should support these features IMO. With this idea, reusing existing
>>>>>>>> HW-offload mechanism is a natural way to me. OVS uses TC to offload
>>>>>>>> flows, then use TC for XDP as well...
>>>>>>>
>>>>>>> I agree that XDP should be able to accelerate existing kernel
>>>>>>> functionality. However, this does not necessarily mean that the kernel
>>>>>>> has to generate an XDP program and install it, like your patch does.
>>>>>>> Rather, what we should be doing is exposing the functionality through
>>>>>>> helpers so XDP can hook into the data structures already present in the
>>>>>>> kernel and make decisions based on what is contained there. We already
>>>>>>> have that for routing; L2 bridging, and some kind of connection
>>>>>>> tracking, are obvious contenders for similar additions.
>>>>>>
>>>>>> Thanks, adding helpers itself should be good, but how does this let users
>>>>>> start using XDP without having them write their own BPF code?
>>>>>
>>>>> It wouldn't in itself. But it would make it possible to write XDP
>>>>> programs that could provide the same functionality; people would then
>>>>> need to run those programs to actually opt-in to this.
>>>>>
>>>>> For some cases this would be a simple "on/off switch", e.g.,
>>>>> "xdp-route-accel --load <dev>", which would install an XDP program that
>>>>> uses the regular kernel routing table (and the same with bridging). We
>>>>> are planning to collect such utilities in the xdp-tools repo - I am
>>>>> currently working on a simple packet filter:
>>>>> https://github.com/xdp-project/xdp-tools/tree/xdp-filter
>>>>
>>>> Let me confirm how this tool adds filter rules.
>>>> Is this adding another commandline tool for firewall?
>>>>
>>>> If so, that is different from my goal.
>>>> Introducing another commandline tool will require people to learn
>>>> more.
>>>>
>>>> My proposal is to reuse kernel interface to minimize such need for
>>>> learning.
>>>
>>> I wasn't proposing that this particular tool should be a replacement for
>>> the kernel packet filter; it's deliberately fairly limited in
>>> functionality. My point was that we could create other such tools for
>>> specific use cases which could be more or less drop-in (similar to how
>>> nftables has a command line tool that is compatible with the iptables
>>> syntax).
>>>
>>> I'm all for exposing more of the existing kernel capabilities to XDP.
>>> However, I think it's the wrong approach to do this by reimplementing
>>> the functionality in eBPF program and replicating the state in maps;
>>> instead, it's better to refactor the existing kernel functionality so it
>>> can be called directly from an eBPF helper function. And then ship a
>>> tool as part of xdp-tools that installs an XDP program to make use of
>>> these helpers to accelerate the functionality.
>>>
>>> Take your example of TC rules: You were proposing a flow like this:
>>>
>>> Userspace TC rule -> kernel rule table -> eBPF map -> generated XDP
>>> program
>>>
>>> Whereas what I mean is that we could do this instead:
>>>
>>> Userspace TC rule -> kernel rule table
>>>
>>> and separately
>>>
>>> XDP program -> bpf helper -> lookup in kernel rule table
>>
>> Thanks, now I see what you mean.
>> You expect an XDP program like this, right?
>>
>> int xdp_tc(struct xdp_md *ctx)
>> {
>> 	int act = bpf_xdp_tc_filter(ctx);
>> 	return act;
>> }
> 
> Yes, basically, except that the XDP program would need to parse the
> packet first, and bpf_xdp_tc_filter() would take a parameter struct with
> the parsed values. See the usage of bpf_fib_lookup() in
> samples/bpf/xdp_fwd_kern.c
> 
>> But doesn't this way lose a chance to reduce/minimize the program to
>> only use necessary features for this device?
> 
> Not necessarily. Since the BPF program does the packet parsing and fills
> in the TC filter lookup data structure, it can limit what features are
> used that way (e.g., if I only want to do IPv6, I just parse the v6
> header, ignore TCP/UDP, and drop everything that's not IPv6). The lookup
> helper could also have a flag argument to disable some of the lookup
> features.
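
To make the discussion concrete, below is a rough sketch of what I imagine such a
program would look like if it only cares about IPv6. Note that bpf_xdp_tc_filter()
and struct bpf_tc_filter_lookup are hypothetical names for the helper and its
parameter struct we are discussing here, not existing APIs; the real interface
would presumably follow how bpf_fib_lookup() is used in the samples.

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ipv6.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* Hypothetical parameter struct for the helper discussed above; the
 * real layout would be defined together with the helper itself. */
struct bpf_tc_filter_lookup {
	__u32 ifindex;
	__u16 eth_proto;
	__u8  ip_proto;
	struct in6_addr saddr;
	struct in6_addr daddr;
	__u64 flags;	/* e.g. flags to disable some lookup features */
};

/* Hypothetical helper, declared here only so the sketch is complete;
 * the helper ID is a placeholder, the helper does not exist today. */
static long (*bpf_xdp_tc_filter)(struct xdp_md *ctx,
				 struct bpf_tc_filter_lookup *params,
				 __u32 size, __u64 flags) = (void *)200;

SEC("xdp")
int xdp_tc(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;
	struct ipv6hdr *ip6h;
	struct bpf_tc_filter_lookup lookup = {};

	if ((void *)(eth + 1) > data_end)
		return XDP_DROP;

	/* Only parse IPv6; everything else is passed up the stack, so
	 * no TCP/UDP parsing code is needed in this program. */
	if (eth->h_proto != bpf_htons(ETH_P_IPV6))
		return XDP_PASS;

	ip6h = (struct ipv6hdr *)(eth + 1);
	if ((void *)(ip6h + 1) > data_end)
		return XDP_PASS;

	lookup.ifindex = ctx->ingress_ifindex;
	lookup.eth_proto = ETH_P_IPV6;
	lookup.ip_proto = ip6h->nexthdr;
	lookup.saddr = ip6h->saddr;
	lookup.daddr = ip6h->daddr;

	/* Look up the parsed keys in the kernel's TC filter tables and
	 * return an XDP action, similar in spirit to bpf_fib_lookup(). */
	return bpf_xdp_tc_filter(ctx, &lookup, sizeof(lookup), 0);
}

char _license[] SEC("license") = "GPL";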

It's unclear to me how users would configure that.
Through options when attaching the program? Something like
$ xdp_tc attach eth0 --only-with ipv6
But can users always determine the necessary features in advance?
Frequent manual reconfiguration whenever TC rules change does not sound nice.
Or should we add a hook to the kernel so that a daemon can listen for TC filter events
and automatically reload the attached program?

Another concern is the key size. If we use the TC core, TC will use its hash table with a
fixed key size, so we cannot decrease the hash table key size this way, can we?
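
To illustrate what I mean by a smaller key: if the installed filters only match on the
IPv4 5-tuple, a generated program can use something like the map below. The struct and
map names are just for illustration, not taken from the actual patch set, whereas going
through the TC core would always mean hashing on the full fixed-size flower key.

#include <linux/bpf.h>
#include <linux/types.h>
#include <bpf/bpf_helpers.h>

/* Illustrative only: when the installed filters match only on the IPv4
 * 5-tuple, a generated program can hash on this 13-byte key. */
struct xdp_flow_key_small {
	__be32 ipv4_src;
	__be32 ipv4_dst;
	__be16 src_port;
	__be16 dst_port;
	__u8   ip_proto;
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, struct xdp_flow_key_small);
	__type(value, __u32);	/* XDP action for matching flows */
} flow_table SEC(".maps");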

> 
> It would probably require a bit of refactoring in the kernel data
> structures so they can be used without being tied to an skb. David Ahern
> did something similar for the fib. For the routing table case, that
> resulted in a significant speedup: About 2.5x-3x the performance when
> using it via XDP (depending on the number of routes in the table).

I'm curious how much such a helper function can improve performance compared to
XDP programs which emulate the kernel feature without using such helpers.
2.5x-3x sounds a bit slow for XDP to me, but that may be a routing-specific problem.

Toshiaki Makita
