Open Source and information security mailing list archives
Message-ID: <CANn89iKiJ91D7fELw9iKB4yCLaDj-WEv27wRS4PtLqM7oK8m=w@mail.gmail.com>
Date: Wed, 31 Aug 2022 10:08:42 -0700
From: Eric Dumazet <edumazet@...gle.com>
To: Toke Høiland-Jørgensen <toke@...e.dk>
Cc: Jamal Hadi Salim <jhs@...atatu.com>, Cong Wang <xiyou.wangcong@...il.com>,
	Jiri Pirko <jiri@...nulli.us>, "David S. Miller" <davem@...emloft.net>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	cake@...ts.bufferbloat.net, netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH net] sch_cake: Return __NET_XMIT_STOLEN when consuming enqueued skb

On Wed, Aug 31, 2022 at 2:25 AM Toke Høiland-Jørgensen <toke@...e.dk> wrote:
>
> When the GSO splitting feature of sch_cake is enabled, GSO superpackets
> will be broken up and the resulting segments enqueued in place of the
> original skb. In this case, CAKE calls consume_skb() on the original skb,
> but still returns NET_XMIT_SUCCESS. This can confuse parent qdiscs into
> assuming the original skb still exists, when it really has been freed. Fix
> this by adding the __NET_XMIT_STOLEN flag to the return value in this case.
>

I think you forgot to give credit to the team who discovered this issue.
Something like this:

Reported-by: zdi-disclosures@...ndmicro.com # ZDI-CAN-18231

> Fixes: 0c850344d388 ("sch_cake: Conditionally split GSO segments")
> Signed-off-by: Toke Høiland-Jørgensen <toke@...e.dk>
> ---
>  net/sched/sch_cake.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
> index a43a58a73d09..a04928082e4a 100644
> --- a/net/sched/sch_cake.c
> +++ b/net/sched/sch_cake.c
> @@ -1713,6 +1713,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
>  	}
>  	idx--;
>  	flow = &b->flows[idx];
> +	ret = NET_XMIT_SUCCESS;
>
>  	/* ensure shaper state isn't stale */
>  	if (!b->tin_backlog) {
> @@ -1771,6 +1772,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
>
>  		qdisc_tree_reduce_backlog(sch, 1-numsegs, len-slen);
>  		consume_skb(skb);
> +		ret |= __NET_XMIT_STOLEN;
>  	} else {
>  		/* not splitting */
>  		cobalt_set_enqueue_time(skb, now);
> @@ -1904,7 +1906,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
>  		}
>  		b->drop_overlimit += dropped;
>  	}
> -	return NET_XMIT_SUCCESS;
> +	return ret;
>  }
>
>  static struct sk_buff *cake_dequeue_one(struct Qdisc *sch)
> --
> 2.37.2
>