Message-ID: <95ac793b-66aa-b53c-a9bc-c86cbc47ee1a@iogearbox.net>
Date: Thu, 5 Apr 2018 12:37:19 +0200
From: Daniel Borkmann <daniel@...earbox.net>
To: Jesper Dangaard Brouer <brouer@...hat.com>,
Eric Leblond <eric@...it.org>,
Victor Julien <victor@...iniac.net>,
Peter Manev <petermanev@...il.com>
Cc: oisf-devel@...ts.openinfosecfoundation.org,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
Alexei Starovoitov <ast@...com>,
"iovisor-dev@...ts.iovisor.org" <iovisor-dev@...ts.iovisor.org>,
Saeed Mahameed <saeedm@...lanox.com>,
Daniel Borkmann <borkmann@...earbox.net>,
netdev@...r.kernel.org, davem@...emloft.net
Subject: Re: [iovisor-dev] Best userspace programming API for XDP features
query to kernel?

On 04/04/2018 02:28 PM, Jesper Dangaard Brouer via iovisor-dev wrote:
> Hi Suricata people,
>
> When Eric Leblond (with my help) integrated XDP in Suricata, we ran
> into the issue that, at Suricata load/start time, we cannot determine
> whether the chosen XDP config options, like xdp-cpu-redirect[1], are valid
> on this HW (e.g. they require driver XDP_REDIRECT support and bpf cpumap).
>
> We would have liked a way to report that the suricata.yaml config was
> invalid for this hardware/setup. As it is, it just loads, and packets get
> silently dropped by XDP (well, a WARN_ONCE and catchable via tracepoints).
>
> My question to suricata developers: (Q1) Do you already have code that
> queries the kernel or drivers for features?
>
> At the IOvisor call (2 weeks ago), we discussed two options for exposing
> the XDP features available in a given driver.
>
> Option#1: Extend the existing ethtool -k/-K "offload and other features"
> with some XDP features that userspace can query. (Do you already query
> offloads, regarding Q1?)
>
> Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.

I don't really mind if you go via ethtool, as long as we handle this
generically from there and e.g. call the dev's ndo_bpf handler such that
we keep all the information in one place. This can be a new ndo_bpf command,
e.g. XDP_QUERY_FEATURES or such.
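
As a rough sketch, just to make that concrete (XDP_QUERY_FEATURES, the
query_features member and the foo_* names below are all made up for
illustration, nothing of this exists today):

  /* include/linux/netdevice.h */
  enum bpf_netdev_command {
          XDP_SETUP_PROG,
          XDP_QUERY_PROG,
          /* ... existing commands ... */
          XDP_QUERY_FEATURES,             /* hypothetical new command */
  };

  struct netdev_bpf {
          enum bpf_netdev_command command;
          union {
                  /* ... existing members ... */
                  /* XDP_QUERY_FEATURES (hypothetical) */
                  struct {
                          u64 features;   /* filled in by the driver */
                  } query_features;
          };
  };

  /* driver side, in its ndo_bpf handler */
  static int foo_ndo_bpf(struct net_device *dev, struct netdev_bpf *bpf)
  {
          switch (bpf->command) {
          case XDP_QUERY_FEATURES:
                  bpf->query_features.features = FOO_XDP_FEATURES;
                  return 0;
          /* ... other commands ... */
          default:
                  return -EINVAL;
          }
  }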

More specifically, what would such a feature mask look like? How fine-grained
would it be? When you add a new minor feature to, say, cpumap that not all
drivers support yet, we'd need a new flag each time, no? Same for metadata,
then potentially for the redirect memory return work, or the af_xdp bits;
xdp_rxq_info would have needed one, etc. What about nfp in terms of XDP
offload capabilities, should they be included as well, or is probing to load
the program and seeing whether it loads/JITs, as we do today, just fine (you'd
otherwise end up with extra flags on a per-BPF-helper basis)? To make a
somewhat reliable assertion about whether feature xyz would work, this would
explode into new feature bits long term. Additionally, if we end up with a
lot of feature flags, it will be very hard for users to determine whether
the particular set of features a driver supports actually amounts to a
fully supported native XDP driver.
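
Purely to illustrate that concern (none of these defines exist, the names
are invented), the mask would have to keep growing along these lines, one
bit per helper/feature, and then again for native vs. offloaded XDP:

  #define XDP_FEAT_BASIC                  BIT(0)  /* XDP_PASS/DROP/TX */
  #define XDP_FEAT_REDIRECT               BIT(1)
  #define XDP_FEAT_REDIRECT_CPUMAP        BIT(2)
  #define XDP_FEAT_META_DATA              BIT(3)
  #define XDP_FEAT_RXQ_INFO               BIT(4)
  #define XDP_FEAT_MEM_RETURN             BIT(5)  /* redirect mem return work */
  #define XDP_FEAT_AF_XDP                 BIT(6)
  /* ... and a new bit for every future addition ... */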

What about keeping this high-level for users instead? E.g. say you have two
strings that drivers can expose as netdev_features_strings, 'xdp-native-full'
or 'xdp-native-partial'. If a driver truly supports all XDP features for a
given kernel, e.g. v4.16, then a query like 'ethtool -k foo' will say
'xdp-native-full'; if at least one feature from e.g. the above list is
missing, then ethtool will report 'xdp-native-partial'; and if not even an
ndo_bpf callback exists, then no 'xdp-native-*' is reported at all. A side
effect might be that it would give an incentive to keep drivers in the
'xdp-native-full' state instead of being downgraded to 'xdp-native-partial'.

Potentially, in the 'xdp-native-partial' state, we can expose a high-level
list of missing features that the driver does not support yet, which would
over time converge towards 'zero' and thus back to 'xdp-native-full'. ethtool
itself could get a new XDP-specific query option that, based on this info,
dumps the full list of supported and unsupported features. In order for this
not to explode, such features would need to be kept high-level, meaning that
if e.g. cpumap gets extended along with support in a number of drivers, then
those that missed out would temporarily be re-flagged with e.g. 'cpumap not
supported' until it is implemented there as well. That way, we don't end up
adding too fine-grained feature bit combinations long term, and it becomes
easier to tell whether a driver supports the full set in native XDP or not.
Thoughts?
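
Seen from userspace, the end result could look something like the following.
Only the xdp-native-* strings are the actual proposal here; the second
command and its output format are entirely made up, just to illustrate what
such an XDP-specific ethtool query could dump:

  $ ethtool -k foo | grep xdp-native
  xdp-native-partial: on

  $ ethtool --show-xdp foo            # hypothetical option name
  supported:     redirect, meta-data, rxq-info
  not supported: cpumap-redirect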

> (Q2) Do Suricata devs have any preference (or other options/ideas) for
> the way the kernel exposes this info to userspace?
>
> [1] http://suricata.readthedocs.io/en/latest/capture-hardware/ebpf-xdp.html#the-xdp-cpu-redirect-case