Message-ID: <fde1319f-6669-44b1-b525-44e2bc48f9a1@mojatatu.com>
Date: Mon, 14 Apr 2025 14:17:16 -0300
From: Victor Nogueira <victor@...atatu.com>
To: "Tai, Gerrard" <gerrard.tai@...rlabs.sg>, netdev@...r.kernel.org
Cc: Willy Tarreau <w@....eu>, Stephen Hemminger <stephen@...workplumber.org>,
Jamal Hadi Salim <jhs@...atatu.com>, Cong Wang <xiyou.wangcong@...il.com>,
jiri@...nulli.us, Pedro Tammela <pctammela@...atatu.com>
Subject: Re: [BUG] net/sched: netem: UAF due to duplication routine
On 4/14/25 02:33, Tai, Gerrard wrote:
> Hi,
>
> I found a bug in the netem qdisc's packet duplication logic. This can
> lead to UAF in classful parents.
> [...]
> Unfortunately, I don't have any great ideas regarding a patch.
Hi,
I was thinking about this, and perhaps we can store the
initialisation state as a flag in the class structure so
that we can detect this recursive case. I created the
diff below and tested it against your repro; it seems to
solve the issue, but I might be missing something.
net/sched/sch_hfsc.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
index b368ac0595d5..9662df5dd77e 100644
--- a/net/sched/sch_hfsc.c
+++ b/net/sched/sch_hfsc.c
@@ -160,6 +160,7 @@ struct hfsc_class {
 	struct runtime_sc cl_ulimit;	/* upperlimit curve */
 	u8		cl_flags;	/* which curves are valid */
+	u8		cl_initialised;	/* Is cl initialised? */
 	u32		cl_vtperiod;	/* vt period sequence number */
 	u32		cl_parentperiod;/* parent's vt period sequence
 					   number*/
 	u32		cl_nactive;	/* number of active children */
@@ -1548,8 +1549,8 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
 {
 	unsigned int len = qdisc_pkt_len(skb);
 	struct hfsc_class *cl;
+	bool is_empty;
 	int err;
-	bool first;
 
 	cl = hfsc_classify(skb, sch, &err);
 	if (cl == NULL) {
@@ -1559,7 +1560,7 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
 		return err;
 	}
 
-	first = !cl->qdisc->q.qlen;
+	is_empty = !cl->qdisc->q.qlen;
 	err = qdisc_enqueue(skb, cl->qdisc, to_free);
 	if (unlikely(err != NET_XMIT_SUCCESS)) {
 		if (net_xmit_drop_count(err)) {
@@ -1569,7 +1570,7 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
 		return err;
 	}
 
-	if (first) {
+	if (is_empty && !cl->cl_initialised) {
 		if (cl->cl_flags & HFSC_RSC)
 			init_ed(cl, len);
 		if (cl->cl_flags & HFSC_FSC)
@@ -1582,6 +1583,7 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
 		if (cl->cl_flags & HFSC_RSC)
 			cl->qdisc->ops->peek(cl->qdisc);
+		cl->cl_initialised = 1;
 	}
 
 	sch->qstats.backlog += len;
cheers,
Victor