Message-ID: <20260121172155.6ec96ef8@kernel.org>
Date: Wed, 21 Jan 2026 17:21:55 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Jesper Dangaard Brouer <hawk@...nel.org>
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org, Eric Dumazet
 <eric.dumazet@...il.com>, "David S. Miller" <davem@...emloft.net>, Paolo
 Abeni <pabeni@...hat.com>, Toke Høiland-Jørgensen
 <toke@...e.dk>, carges@...udflare.com, kernel-team@...udflare.com, Yan Zhai
 <yan@...udflare.com>
Subject: Re: [PATCH net-next v1] net: sched: sfq: add detailed drop reasons
 for monitoring

On Wed, 21 Jan 2026 20:13:31 +0100 Jesper Dangaard Brouer wrote:
> >> I noticed commit 5765c7f6e317 ("net_sched: sch_fq: add three
> >> drop_reason") (Author: Eric Dumazet).
> >>
> >>    SKB_DROP_REASON_FQ_BAND_LIMIT: Per-band packet limit exceeded
> >>    SKB_DROP_REASON_FQ_HORIZON_LIMIT: Packet timestamp too far in future
> >>    SKB_DROP_REASON_FQ_FLOW_LIMIT: Per-flow packet limit exceeded
> >>
> >> Should I/we make SKB_DROP_REASON_QDISC_MAXDEPTH specific for SFQ ?
> >> Like naming it = SKB_DROP_REASON_SFQ_MAXDEPTH ?  
> > 
> > FWIW FLOW_LIMIT is more intuitive to me, but I'm mostly dealing with
> > fq so probably because that's the param name there.
> > 
> > I'd prefer the reuse (just MAXDEPTH, I don't see equivalent of
> > MAXFLOWS?). We assign multiple names the same values in the enum to
> > avoid breaking FQ users.  
> 
> I've taken a detailed look at how we consume this in production and
> which Prometheus metrics we generate. My conclusion is that we want to
> keep Eric's approach of having qdisc-specific drop reasons (that
> relate to qdisc tunables), but extend it with a prefix
> "SKB_DROP_REASON_QDISC_xxx". This is what I implemented in [v2], e.g.
> "SKB_DROP_REASON_QDISC_SFQ_MAXFLOWS".  The QDISC_ prefix enables
> pattern matching for qdisc-related drops, allowing monitoring tools
> to categorize new/future drop reasons into the same qdisc category.
> 
> For Prometheus drop_reason metrics the decision was to omit the
> net_device label to keep the metric counts manageable and avoid
> exploding the number of time series (lower cardinality, less storage).
> Without qdisc-specific names in the enum, we'd be forced to add back the
> net_device label, which would explode our time series count.
> 
> The FQ flow_limit and SFQ depth parameters have different semantics: FQ
> uses exact flow classification while SFQ uses stochastic hashing. They
> also have different defaults (FQ: 100 packets, SFQ: 127 packets). Having
> the same drop_reason for both would obscure which qdisc's tunable needs
> adjustment when analyzing production drops.
> 
> Having more enum numbers for drop reasons is essentially free, while
> keeping metrics per net_device is expensive in terms of storage and
> query performance.

I'm not familiar with Prometheus; is there some off-the-shelf plugin
which exports the drops? The kfree_skb tracepoint does not have a
netdev, so if you're saying you have the ability to see the netdev then
presumably there's some BPF in the mix BTF'ing the netdev info out.
And if you can read the netdev info, you can just as well read
netdev->qdisc->ops->id and/or netdev->_tx[0]->qdisc->ops->id.

It kinda reads to me like you're trying to encode otherwise-reachable
metadata into the drop reason.
