Message-ID: <20190708155158.3f75b57c@cakuba.netronome.com>
Date: Mon, 8 Jul 2019 15:51:58 -0700
From: Jakub Kicinski <jakub.kicinski@...ronome.com>
To: Ido Schimmel <idosch@...sch.org>
Cc: David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
jiri@...lanox.com, mlxsw@...lanox.com, dsahern@...il.com,
roopa@...ulusnetworks.com, nikolay@...ulusnetworks.com,
andy@...yhouse.net, pablo@...filter.org,
pieter.jansenvanvuuren@...ronome.com, andrew@...n.ch,
f.fainelli@...il.com, vivien.didelot@...il.com,
idosch@...lanox.com,
Alexei Starovoitov <alexei.starovoitov@...il.com>
Subject: Re: [PATCH net-next 00/11] Add drop monitor for offloaded data paths
On Mon, 8 Jul 2019 16:19:08 +0300, Ido Schimmel wrote:
> On Sun, Jul 07, 2019 at 12:45:41PM -0700, David Miller wrote:
> > From: Ido Schimmel <idosch@...sch.org>
> > Date: Sun, 7 Jul 2019 10:58:17 +0300
> >
> > > Users have several ways to debug the kernel and understand why a packet
> > > was dropped. For example, using "drop monitor" and "perf". Both
> > > utilities trace kfree_skb(), which is the function called when a packet
> > > is freed as part of a failure. The information provided by these tools
> > > is invaluable when trying to understand the cause of a packet loss.
> > >
> > > In recent years, large portions of the kernel data path were offloaded
> > > to capable devices. Today, it is possible to perform L2 and L3
> > > forwarding in hardware, as well as tunneling (IP-in-IP and VXLAN).
> > > Different TC classifiers and actions are also offloaded to capable
> > > devices, at both ingress and egress.
> > >
> > > However, when the data path is offloaded it is not possible to achieve
> > > the same level of introspection, as tools such as "perf" and "drop
> > > monitor" become irrelevant.
> > >
> > > This patchset aims to solve this by allowing users to monitor packets
> > > that the underlying device decided to drop along with relevant metadata
> > > such as the drop reason and ingress port.
> >
> > We are now going to have 5 or so ways to capture packets passing through
> > the system; this is nonsense.
> >
> > AF_PACKET, kfree_skb drop monitor, perf, XDP perf events, and now this
> > devlink thing.
> >
> > This is insanity: too many ways to do the same thing, and therefore the
> > worst possible user experience.
> >
> > Pick _ONE_ method to trap packets, and forward normal kfree_skb events,
> > XDP perf events, and these taps there too.
> >
> > I mean really, think about it from the average user's perspective. To
> > see all drops/pkts I have to attach a kfree_skb tracepoint, and not just
> > listen on devlink but configure a special tap thing beforehand, and then
> > if someone is using XDP I gotta set up another perf event buffer capture
> > thing too.
>
> Let me try to explain again because I probably wasn't clear enough. The
> devlink-trap mechanism is not doing the same thing as other solutions.
>
> The packets we are capturing in this patchset are packets that the
> kernel (the CPU) never saw up until now - they were silently dropped by
> the underlying device performing the packet forwarding instead of the
> CPU.
When you say silently dropped, do you mean that mlxsw as of today
doesn't have any counters exposed for those events?
If we wanted to consolidate this into something existing we could either
(a) add similar traps in the kernel data path; or
(b) make these traps an extension of statistics.
My knee-jerk reaction to seeing the patches was that they add a new
place where device statistics are reported. Users who want to know why
things are dropped will not get a detailed breakdown from ethtool -S,
which for better or worse is the one-stop shop for device stats today.
Having thought about it some more, however, I think that having a
forwarding "exception" object and hanging statistics off of it is a
better design, even if we need to deal with some duplication to get
there.
IOW having a way to "trap all packets which would increment a
statistic" (option (b) above) is probably a bad design.
As for (a), I wonder how many of those events have a corresponding event
in the kernel stack? If we could add corresponding trace points and
just feed those from the device driver, that'd obviously be a holy
grail. Not to mention that requiring trace points to be added to the
core would make Alexei happy:
http://vger.kernel.org/netconf2019_files/netconf2019_slides_ast.pdf#page=3
;)
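To make (a) a bit more concrete, here is a rough sketch of the kind of
shared tracepoint a driver could feed (all names invented, nothing here
is from the posted series):

#undef TRACE_SYSTEM
#define TRACE_SYSTEM net_hw_drop

#if !defined(_TRACE_NET_HW_DROP_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_NET_HW_DROP_H

#include <linux/skbuff.h>
#include <linux/tracepoint.h>

TRACE_EVENT(net_hw_drop,
	TP_PROTO(const struct sk_buff *skb, const char *reason, int in_ifindex),

	TP_ARGS(skb, reason, in_ifindex),

	TP_STRUCT__entry(
		__string(reason, reason)
		__field(int, in_ifindex)
		__field(unsigned int, len)
	),

	TP_fast_assign(
		__assign_str(reason, reason);
		__entry->in_ifindex = in_ifindex;
		__entry->len = skb->len;
	),

	TP_printk("reason=%s ifindex=%d len=%u",
		  __get_str(reason), __entry->in_ifindex, __entry->len)
);

#endif /* _TRACE_NET_HW_DROP_H */

#include <trace/define_trace.h>

A driver's trap handler would then just call trace_net_hw_drop() and
the existing perf/ftrace tooling would pick it up the same way it picks
up kfree_skb today.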
That's my $.02, not very insightful.
> For each such packet we get valuable metadata from the underlying device
> such as the drop reason and the ingress port. With time, even more
> reasons and metadata could be provided (e.g., egress port, traffic
> class). Netlink provides a structured and extensible way to report the
> packet along with the metadata to interested users. The tc-sample action
> uses a similar concept.
>
> I would like to emphasize that these dropped packets are not injected into
> the kernel's receive path and are therefore not subject to kfree_skb() and
> related infrastructure. There is no need to waste CPU cycles on packets
> we already know were dropped (and why). Further, packets subject to hardware
> tail/early drops would not be dropped by the kernel anyway, given its qdiscs
> are probably empty.
>
> Regarding the use of devlink, current ASICs can forward packets at
> 6.4Tb/s. We do not want to overwhelm the CPU with dropped packets and
> therefore we give users the ability to control - via devlink - the
> trapping of certain packets to the CPU and their reporting to user
> space. In the future, devlink-trap can be extended to support the
> configuration of the hardware policers of each trap.