Message-ID: <28911dac-6aec-42ce-9101-9071e18e1522@kernel.org>
Date: Thu, 22 Jan 2026 16:33:00 +0100
From: Jesper Dangaard Brouer <hawk@...nel.org>
To: Jakub Kicinski <kuba@...nel.org>
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org,
 Eric Dumazet <eric.dumazet@...il.com>, "David S. Miller"
 <davem@...emloft.net>, Paolo Abeni <pabeni@...hat.com>,
 Toke Høiland-Jørgensen <toke@...e.dk>,
 carges@...udflare.com, kernel-team@...udflare.com,
 Yan Zhai <yan@...udflare.com>
Subject: Re: [PATCH net-next v1] net: sched: sfq: add detailed drop reasons
 for monitoring




On 22/01/2026 02.21, Jakub Kicinski wrote:
> On Wed, 21 Jan 2026 20:13:31 +0100 Jesper Dangaard Brouer wrote:
>>>> I noticed commit 5765c7f6e317 ("net_sched: sch_fq: add three
>>>> drop_reason") (Author: Eric Dumazet).
>>>>
>>>>     SKB_DROP_REASON_FQ_BAND_LIMIT: Per-band packet limit exceeded
>>>>     SKB_DROP_REASON_FQ_HORIZON_LIMIT: Packet timestamp too far in future
>>>>     SKB_DROP_REASON_FQ_FLOW_LIMIT: Per-flow packet limit exceeded
>>>>
>>>> Should I/we make SKB_DROP_REASON_QDISC_MAXDEPTH specific for SFQ ?
>>>> Like naming it = SKB_DROP_REASON_SFQ_MAXDEPTH ?
>>>
>>> FWIW FLOW_LIMIT is more intuitive to me, but I'm mostly dealing with
>>> fq so probably because that's the param name there.
>>>
>>> I'd prefer the reuse (just MAXDEPTH, I don't see equivalent of
>>> MAXFLOWS?). We assign multiple names the same values in the enum to
>>> avoid breaking FQ users.
>>
>> I've taken a detailed look at how we consume this in production and
>> which Prometheus metrics we generate. My conclusion is that we want to
>> keep Eric's approach of having qdisc-specific drop reasons (that
>> relate to qdisc tunables), but extend it with a prefix
>> "SKB_DROP_REASON_QDISC_xxx". This is what I implemented in [v2], e.g.
>> "SKB_DROP_REASON_QDISC_SFQ_MAXFLOWS".  The QDISC_ prefix enables
>> pattern matching for qdisc-related drops, allowing monitoring tools to
>> categorize new/future drop reasons into the same qdisc category.
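
(To make the pattern matching concrete, here is a hypothetical
userspace helper -- not code from [v2]; it assumes the monitor has
already resolved the enum value to its symbolic name, e.g. via BTF.)

#include <string.h>

/* Bucket a resolved drop-reason name into a coarse category, so any
 * future SKB_DROP_REASON_QDISC_* value lands in the "qdisc" bucket
 * without a monitor update.
 */
static const char *drop_reason_category(const char *name)
{
        static const char prefix[] = "SKB_DROP_REASON_QDISC_";

        if (strncmp(name, prefix, sizeof(prefix) - 1) == 0)
                return "qdisc";
        return "other";
}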
>>
>> For Prometheus drop_reason metrics the decision was to omit the
>> net_device label to keep the metric counts manageable and avoid
>> exploding the number of time series (lower cardinality, less storage).
>> Without qdisc-specific names in the enum, we'd be forced to add back the
>> net_device label, which would explode our time series count.
>>
>> The FQ flow_limit and SFQ depth parameters have different semantics: FQ
>> uses exact flow classification while SFQ uses stochastic hashing. They
>> also have different defaults (FQ: 100 packets, SFQ: 127 packets). Having
>> the same drop_reason for both would obscure which qdisc's tunable needs
>> adjustment when analyzing production drops.
>>
>> Having more enum numbers for drop reasons is essentially free, while
>> keeping metrics per net_device is expensive in terms of storage and
>> query performance.
> 
> I'm not familiar with Prometheus; is there some off-the-shelf plugin
> which exports the drops?

(I assume you are familiar with the Prometheus cardinality explosion
problem?)
We have written our own drop-reason monitor that exports drop_reason
enums directly to Prometheus (without decoding netdev or qdisc
information). (As an aside: we also collect sampled (e.g. 1/4000) copies
of packet data in userspace, where we gather as much metadata as
possible, but that is too expensive to do for every packet.)
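
(Rough sketch, not our actual monitor code: it is essentially a BPF
program on the kfree_skb tracepoint that bumps a per-reason counter,
which userspace then scrapes into Prometheus. The struct/field names
below are the usual bpftool-generated vmlinux.h ones; the map sizing is
made up for illustration.)

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* drop reason (u32) -> packet count */
struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 1024);
        __type(key, u32);
        __type(value, u64);
} drop_reason_counts SEC(".maps");

SEC("tracepoint/skb/kfree_skb")
int count_drop_reason(struct trace_event_raw_kfree_skb *ctx)
{
        u32 reason = ctx->reason;   /* enum skb_drop_reason value */
        u64 one = 1, *cnt;

        cnt = bpf_map_lookup_elem(&drop_reason_counts, &reason);
        if (cnt)
                __sync_fetch_and_add(cnt, 1);
        else
                bpf_map_update_elem(&drop_reason_counts, &reason, &one,
                                    BPF_NOEXIST);
        return 0;
}

char LICENSE[] SEC("license") = "GPL";

Userspace just walks the map and exports one time series per reason
enum, with no netdev or qdisc label.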

> The kfree_skb tracepoint does not have netdev,

Correct, the kfree_skb tracepoint does not have the netdev available.
I'm also saying that I don't want the netdev, because that would
explode the Prometheus metric cardinality.

> so if you're saying you have the ability to see the netdev then
> presumably there's some BPF in the mix BTF'ing the netdev info out.
> And if you can read the netdev info, you can as well read
> netdev->qdisc->ops->id and/or netdev->_tx[0]->qdisc->ops->id
> 
> It kinda reads to me like you're trying to encode otherwise-reachable
> metadata into the drop reason.

This feels like a misunderstanding. We don't want to (somehow) decode
extra metadata like netdev or qdisc.  First of all, we don't want to
spend extra BPF prog cycles doing this (if it's even possible), and
secondly we want to keep the metric simple (see cardinality explosion
above).

I'm simply saying let us keep Eric's current approach of having
qdisc-specific drop reasons *when* those relate to qdisc-specific
tunables. And I'm arguing that FQ flow_limit and SFQ depth are two
different things and should have their own enums.
   I have a specific production use-case where I want to configure SFQ
with a small flow "depth" to penalize elephant flows, so I'm okay with
SKB_DROP_REASON_SFQ_MAXDEPTH events.  So, I want to be able to tell
those apart from FQ drops (SKB_DROP_REASON_FQ_FLOW_LIMIT). I hope this
makes it clear why I want these to be separate enums(?)
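
(For illustration only -- the device name and numbers are made up, but
roughly something like:

  tc qdisc replace dev eth0 root sfq depth 16 perturb 10

i.e. each hash bucket holds only a handful of packets, so bulky flows
hit the per-flow depth limit first, and those drops would be accounted
to SFQ's "depth" tunable.)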

--Jesper

