Date:   Tue, 26 Jan 2021 23:51:53 +0100
From:   Daniel Borkmann <daniel@...earbox.net>
To:     Ciara Loftus <ciara.loftus@...el.com>, netdev@...r.kernel.org,
        bpf@...r.kernel.org, magnus.karlsson@...el.com, bjorn@...nel.org,
        weqaar.a.janjua@...el.com
Subject: Re: [PATCH bpf-next v2 1/6] xsk: add tracepoints for packet drops

On 1/26/21 8:52 AM, Ciara Loftus wrote:
> This commit introduces static perf tracepoints for AF_XDP which
> report information about packet drops, such as the netdev, qid and
> high level reason for the drop. The tracepoint can be
> enabled/disabled by toggling
> /sys/kernel/debug/tracing/events/xsk/xsk_packet_drop/enable
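
[For reference, the toggle described in the commit message can be exercised from the shell roughly like this — a sketch assuming the patch is applied and tracefs is mounted at the usual debugfs location, as stated above:]

```shell
# Enable the proposed xsk_packet_drop tracepoint (path from the commit message):
echo 1 > /sys/kernel/debug/tracing/events/xsk/xsk_packet_drop/enable

# Watch drop events as they are emitted:
cat /sys/kernel/debug/tracing/trace_pipe

# Disable the tracepoint again when done:
echo 0 > /sys/kernel/debug/tracing/events/xsk/xsk_packet_drop/enable
```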

Could you add a rationale to the commit log on why the xsk diag stats dump
is insufficient here, given that you add tracepoints to most locations where
we already bump the counters? And given that the diag infra also exposes the
ifindex, queue id, etc., why is troubleshooting the xsk socket via the ss
tool not sufficient?
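
[For context, the existing per-socket counters Daniel refers to are also reachable without tracepoints, via the XDP_STATISTICS getsockopt. A minimal sketch, assuming an already-created AF_XDP socket fd; the three fields shown are the ones present in if_xdp.h at the time of this thread:]

```c
/* Sketch: read per-socket drop counters via the existing XDP_STATISTICS
 * getsockopt instead of tracepoints. Returns 0 on success, -1 on failure. */
#include <linux/if_xdp.h>
#include <sys/socket.h>
#include <stdio.h>

#ifndef SOL_XDP
#define SOL_XDP 283
#endif

static int dump_xsk_stats(int xsk_fd)
{
	struct xdp_statistics stats;
	socklen_t optlen = sizeof(stats);

	if (getsockopt(xsk_fd, SOL_XDP, XDP_STATISTICS, &stats, &optlen))
		return -1;

	printf("rx_dropped=%llu rx_invalid_descs=%llu tx_invalid_descs=%llu\n",
	       (unsigned long long)stats.rx_dropped,
	       (unsigned long long)stats.rx_invalid_descs,
	       (unsigned long long)stats.tx_invalid_descs);
	return 0;
}
```

[The same counters are what the xsk diag/ss path exposes per socket, along with ifindex and queue id.]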

[...]
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 4faabd1ecfd1..9b850716630b 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -11,6 +11,7 @@
>   
>   #define pr_fmt(fmt) "AF_XDP: %s: " fmt, __func__
>   
> +#include <linux/bpf_trace.h>
>   #include <linux/if_xdp.h>
>   #include <linux/init.h>
>   #include <linux/sched/mm.h>
> @@ -158,6 +159,7 @@ static int __xsk_rcv_zc(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
>   	addr = xp_get_handle(xskb);
>   	err = xskq_prod_reserve_desc(xs->rx, addr, len);
>   	if (err) {
> +		trace_xsk_packet_drop(xs->dev->name, xs->queue_id, XSK_TRACE_DROP_RXQ_FULL);
>   		xs->rx_queue_full++;
>   		return err;

I presume that if this is triggered under stress, you'll likely also spam
your trace event log with potentially millions of messages per second?
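
[One way to bound such spam from userspace would be the generic ftrace event trigger/filter facility — a sketch, assuming the event lands as proposed; the field name in the filter is hypothetical and would depend on the final TRACE_EVENT definition:]

```shell
# Automatically stop tracing after the first 1000 xsk_packet_drop hits,
# using a standard ftrace event trigger:
echo 'traceoff:1000' > /sys/kernel/debug/tracing/events/xsk/xsk_packet_drop/trigger

# Or record only one drop reason via standard trace event filtering
# (field name here is an assumption about the event's format):
echo 'reason == 1' > /sys/kernel/debug/tracing/events/xsk/xsk_packet_drop/filter
```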

>   	}
> @@ -192,6 +194,7 @@ static int __xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
>   
>   	len = xdp->data_end - xdp->data;
>   	if (len > xsk_pool_get_rx_frame_size(xs->pool)) {
> +		trace_xsk_packet_drop(xs->dev->name, xs->queue_id, XSK_TRACE_DROP_PKT_TOO_BIG);
>   		xs->rx_dropped++;
>   		return -ENOSPC;
>   	}
> @@ -516,6 +519,8 @@ static int xsk_generic_xmit(struct sock *sk)
>   		if (err == NET_XMIT_DROP) {
>   			/* SKB completed but not sent */
>   			err = -EBUSY;
> +			trace_xsk_packet_drop(xs->dev->name, xs->queue_id,
> +					      XSK_TRACE_DROP_DRV_ERR_TX);

Is there a reason to not bump error counter here?

>   			goto out;
>   		}
>   
> diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
> index 8de01aaac4a0..d3c1ca83c75d 100644
> --- a/net/xdp/xsk_buff_pool.c
> +++ b/net/xdp/xsk_buff_pool.c
> @@ -1,5 +1,6 @@
>   // SPDX-License-Identifier: GPL-2.0
>   
