Date:   Wed, 27 May 2020 18:19:10 +0300
From:   Vladimir Oltean <>
To:     Petr Machata <>
Cc:     netdev <>, Jakub Kicinski <>,
        Eric Dumazet <>,
        Jamal Hadi Salim <>,
        Jiri Pirko <>,
        Ido Schimmel <>
Subject: Re: [RFC PATCH net-next 0/3] TC: Introduce qevents

Hi Petr,

On Tue, 26 May 2020 at 20:11, Petr Machata <> wrote:
> The Spectrum hardware allows execution of one of several actions as a
> result of queue management events: tail-dropping, early-dropping, marking a
> packet, or crossing a configured latency threshold or buffer size. Such
> packets can be mirrored, trapped, or sampled.
>
> Modeling the action to be taken as simply a TC action is very attractive,
> but it is not obvious where to put these actions. At least with ECN marking
> one could imagine a tree of qdiscs and classifiers that effectively
> accomplishes this task, albeit in an impractically complex manner. But
> there is just no way to match on dropped-ness of a packet, let alone
> dropped-ness due to a particular reason.
>
> To allow configuring user-defined actions as a result of the inner workings
> of a qdisc, this patch set introduces the concept of qevents. These are
> attach points for TC blocks, where filters can be put that are executed as
> the packet hits well-defined points in the qdisc algorithms. The attached
> blocks can be shared in a manner similar to clsact ingress and egress
> blocks; arbitrary classifiers with arbitrary actions can be put on them;
> and so on.
>
> For example:
> # tc qdisc add dev eth0 root handle 1: \
>         red limit 500K avpkt 1K qevent early block 10
> # tc filter add block 10 \
>         matchall action mirred egress mirror dev eth1
>
> Patch #1 of this set introduces several helpers to allow easy and uniform
> addition of qevents to qdiscs. The following two patches, #2 and #3, then
> add two qevents to the RED qdisc: the "early" qevent fires when a packet is
> early-dropped; the "mark" qevent, when it is ECN-marked.
>
> This patch set does not deal with offloading. The idea there is that a
> driver will be able to figure out that a given block is used in qevent
> context by looking at the binder type. A future patch set will add a qdisc
> pointer to struct flow_block_offload, which a driver will be able to
> consult to glean the TC or other relevant attributes.
>
> Petr Machata (3):
>   net: sched: Introduce helpers for qevent blocks
>   net: sched: sch_red: Split init and change callbacks
>   net: sched: sch_red: Add qevents "early" and "mark"
>
>  include/net/flow_offload.h     |   2 +
>  include/net/pkt_cls.h          |  48 +++++++++++++++
>  include/uapi/linux/pkt_sched.h |   2 +
>  net/sched/cls_api.c            | 107 +++++++++++++++++++++++++++++++++
>  net/sched/sch_red.c            | 100 ++++++++++++++++++++++++++----
>  5 files changed, 247 insertions(+), 12 deletions(-)
>
> --
> 2.20.1
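[For concreteness, the two qevents from the cover letter can be combined on
one qdisc. This is a sketch only: device names and block numbers are
placeholders, and it requires a kernel carrying these patches.]

```shell
# RED with ECN marking; separate shared blocks for the "early" and "mark"
# qevents (same red parameters as in the cover letter's example).
tc qdisc add dev eth0 root handle 1: \
        red limit 500K avpkt 1K ecn qevent early block 10 qevent mark block 11

# Mirror early-dropped packets to eth1 for offline analysis.
tc filter add block 10 matchall action mirred egress mirror dev eth1

# Count ECN-marked packets; stats readable via "tc -s filter show block 11".
tc filter add block 11 matchall action pass
```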

I only took a cursory glance at your patches. Can these "qevents" be
added to code outside of the packet scheduler, such as the bridge? Or
can the bridge mark the packets somehow, so that any generic qdisc can
recognize this mark without qdisc-specific code?

A very common use case that is currently impossible to implement is
rate-limiting flooded (broadcast, unknown-unicast, unknown-multicast)
traffic. Can your "qevents" be used to describe this, or must it be
described separately?
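[If such a flood hook did exist as a qevent block, the rate limiting itself
would presumably reduce to a policer on that block. The block number below is
a placeholder, and the hook that would feed the block is exactly the open
question above; only the filter syntax is standard tc.]

```shell
# Police everything hitting the (hypothetical) flood-event block to 1 Mbit/s.
tc filter add block 20 matchall \
        action police rate 1mbit burst 64k conform-exceed drop
```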

