Message-ID: <20260122175919.590601b6@kernel.org>
Date: Thu, 22 Jan 2026 17:59:19 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Jesper Dangaard Brouer <hawk@...nel.org>
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org, Eric Dumazet
 <eric.dumazet@...il.com>, "David S. Miller" <davem@...emloft.net>, Paolo
 Abeni <pabeni@...hat.com>, Toke Høiland-Jørgensen
 <toke@...e.dk>, carges@...udflare.com, kernel-team@...udflare.com, Yan Zhai
 <yan@...udflare.com>
Subject: Re: [PATCH net-next v1] net: sched: sfq: add detailed drop reasons
 for monitoring

On Thu, 22 Jan 2026 16:33:00 +0100 Jesper Dangaard Brouer wrote:
> > The kfree_skb tracepoint does not have netdev,  
> 
> Correct, the kfree_skb tracepoint does not have the netdev available.
> I'm also saying that I don't want the netdev, because that would
> explode the Prometheus metric cardinality.

Perhaps cutting someone's reply mid-sentence is not the most effective
way to communicate.

> > so if you're saying you have the ability to see the netdev then
> > presumably there's some BPF in the mix BTF'ing the netdev info out.
> > And if you can read the netdev info, you can as well read
> > netdev->qdisc->ops->id and/or netdev->_tx[0]->qdisc->ops->id
> > 
> > It kinda reads to me like you're trying to encode otherwise-reachable
> > metadata into the drop reason.  
> 
> This feels like a misunderstanding. We don't want to (somehow) decode
> extra metadata like the netdev or qdisc.  First of all, we don't want
> to spend extra BPF prog cycles doing this (if it's even possible), and
> secondly we want to keep the metric simple (see cardinality explosion).
> 
> I'm simply saying let us keep Eric's current approach of having
> qdisc-specific drop reasons *when* those relate to qdisc-specific
> tunables.  And I'm arguing that FQ flow_limit and SFQ depth are two
> different things and should have their own enums.
>    I have a specific production use case where I want to configure SFQ
> with a small flow "depth" to penalize elephant flows, so I'm okay with
> SKB_DROP_REASON_SFQ_MAXDEPTH events.  So, I want to be able to tell
> these apart from FQ drops (SKB_DROP_REASON_FQ_FLOW_LIMIT). I hope this
> makes it clear why I want these to be separate enums(?)
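
(For concreteness, a rough and untested sketch of the reason-keyed
counter being discussed: a BPF program on the kfree_skb tracepoint
keeping one counter per drop reason, so cardinality stays bounded by
the number of SKB_DROP_REASON_* values rather than by netdevs or
flows. Distinct reasons like SKB_DROP_REASON_SFQ_MAXDEPTH and
SKB_DROP_REASON_FQ_FLOW_LIMIT would simply show up as separate keys.
Program and map names below are invented for illustration.)

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* One counter per enum skb_drop_reason value. */
struct {
        __uint(type, BPF_MAP_TYPE_PERCPU_HASH);
        __uint(max_entries, 512);
        __type(key, u32);       /* drop reason */
        __type(value, u64);     /* drop count */
} drop_counts SEC(".maps");

SEC("tp_btf/kfree_skb")
int BPF_PROG(count_drop, struct sk_buff *skb, void *location,
             enum skb_drop_reason reason)
{
        u32 key = reason;
        u64 init = 1, *cnt;

        cnt = bpf_map_lookup_elem(&drop_counts, &key);
        if (cnt)
                (*cnt)++;
        else
                bpf_map_update_elem(&drop_counts, &key, &init, BPF_NOEXIST);
        return 0;
}

char LICENSE[] SEC("license") = "GPL";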

It really sounds to me like _in your setup_ there's some special
meaning associated with some traffic being in SFQ and other
traffic in FQ. And you can get the relevant information with BPF;
you're just saying that you don't want to spend the cycles (with no
data to back that up).
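
(Roughly what that could look like, as an untested sketch rather than
anything from the patch under discussion: the same kind of kfree_skb
handler, extended with CO-RE reads of skb->dev->qdisc->ops->id so the
counter is keyed on (reason, root qdisc id). Whether skb->dev and the
root qdisc pointer are still valid at a given drop site, and whether
the extra reads are actually cheap, are assumptions here.)

#include <vmlinux.h>
#include <bpf/bpf_core_read.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct drop_key {
        u32 reason;
        char qdisc[16];         /* Qdisc_ops::id, e.g. "sfq" or "fq" */
};

struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 4096);
        __type(key, struct drop_key);
        __type(value, u64);
} drops SEC(".maps");

SEC("tp_btf/kfree_skb")
int BPF_PROG(drop_with_qdisc, struct sk_buff *skb, void *location,
             enum skb_drop_reason reason)
{
        struct drop_key key = { .reason = reason };
        struct Qdisc *q;
        u64 one = 1, *cnt;

        /* Root qdisc of the skb's device; may be NULL at some drop sites. */
        q = BPF_CORE_READ(skb, dev, qdisc);
        if (q)
                BPF_CORE_READ_STR_INTO(&key.qdisc, q, ops, id);

        cnt = bpf_map_lookup_elem(&drops, &key);
        if (cnt)
                __sync_fetch_and_add(cnt, 1);
        else
                bpf_map_update_elem(&drops, &key, &one, BPF_NOEXIST);
        return 0;
}

char LICENSE[] SEC("license") = "GPL";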

I'm not gonna lose sleep over this; if you convince another maintainer
to merge it, that's fine. To me, having per-qdisc drop reasons when
qdiscs are *pluggable modules* and drop reasons are a core thing makes
no sense. You're encoding arbitrary metadata into the drop reason for
the convenience of your narrow use case.
