Message-ID: <5b7c5a08-4225-9f20-2a2c-57767a36a967@gmail.com>
Date: Wed, 14 Mar 2018 20:10:30 -0700
From: John Fastabend <john.fastabend@...il.com>
To: Eric Dumazet <edumazet@...gle.com>,
"David S . Miller" <davem@...emloft.net>
Cc: netdev <netdev@...r.kernel.org>,
Eric Dumazet <eric.dumazet@...il.com>,
Jamal Hadi Salim <jhs@...atatu.com>
Subject: Re: [PATCH v2 net] net: sched: fix uses after free
On 03/14/2018 06:53 PM, Eric Dumazet wrote:
> syzbot reported one use-after-free in pfifo_fast_enqueue() [1]
>
> The issue here is that we cannot reuse the skb after a successful skb_array_produce(),
> since another CPU might have consumed it already.
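Right - just to spell out the ordering for anyone following along (a rough,
untested sketch, not the exact hunk from your patch; band2list()/prio2band
are the existing static helpers in sch_generic.c): anything derived from the
skb has to be read *before* skb_array_produce(), because once the produce
succeeds a consumer running on another CPU may already have freed the skb.

static int pfifo_fast_enqueue_sketch(struct sk_buff *skb, struct Qdisc *qdisc,
				     struct sk_buff **to_free)
{
	struct pfifo_fast_priv *priv = qdisc_priv(qdisc);
	struct skb_array *q = band2list(priv, prio2band[skb->priority & TC_PRIO_MAX]);
	unsigned int pkt_len = qdisc_pkt_len(skb);	/* snapshot before produce */
	int err;

	err = skb_array_produce(q, skb);
	if (unlikely(err))
		return qdisc_drop_cpu(skb, qdisc, to_free);

	/* skb may already have been consumed and freed by another CPU,
	 * so only the pkt_len snapshot is used from here on.
	 */
	qdisc_qstats_cpu_qlen_inc(qdisc);
	this_cpu_add(qdisc->cpu_qstats->backlog, pkt_len);
	return NET_XMIT_SUCCESS;
}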
>
> I believe a similar problem exists in try_bulk_dequeue_skb_slow(),
> when we put an skb into qdisc_enqueue_skb_bad_txq() for a lockless qdisc.
>
[...]
> Fixes: c5ad119fb6c0 ("net: sched: pfifo_fast use skb_array")
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Reported-by: syzbot+ed43b6903ab968b16f54@...kaller.appspotmail.com
> Cc: John Fastabend <john.fastabend@...il.com>
> Cc: Jamal Hadi Salim <jhs@...atatu.com>
> Cc: Cong Wang <xiyou.wangcong@...il.com>
> Cc: Jiri Pirko <jiri@...nulli.us>
> ---
> net/sched/sch_generic.c | 22 +++++++++++++---------
> 1 file changed, 13 insertions(+), 9 deletions(-)
>
Thanks!
Acked-by: John Fastabend <john.fastabend@...il.com>
> diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
> index 190570f21b208d5a17943360a3a6f85e1c2a2187..7e3fbe9cc936be376b66a5b12bf8957c3b601f2c 100644
> --- a/net/sched/sch_generic.c
> +++ b/net/sched/sch_generic.c
> @@ -106,6 +106,14 @@ static inline void qdisc_enqueue_skb_bad_txq(struct Qdisc *q,
>
> __skb_queue_tail(&q->skb_bad_txq, skb);
>
> + if (qdisc_is_percpu_stats(q)) {
> + qdisc_qstats_cpu_backlog_inc(q, skb);
So I guess the skb access above needs to be removed as well, per your
comment in the commit description (rough sketch of what I mean below the
hunk). But that can be another patch.
> + qdisc_qstats_cpu_qlen_inc(q);
> + } else {
> + qdisc_qstats_backlog_inc(q, skb);
> + q->q.qlen++;
> + }
> +
> if (lock)
> spin_unlock(lock);
> }
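As mentioned above, an untested sketch of what that follow-up could look
like - snapshot the length before the skb goes onto skb_bad_txq and only
use the snapshot afterwards, mirroring the pfifo_fast_enqueue() change in
this patch:

static inline void qdisc_enqueue_skb_bad_txq(struct Qdisc *q,
					     struct sk_buff *skb)
{
	spinlock_t *lock = NULL;
	unsigned int pkt_len = qdisc_pkt_len(skb);	/* read before queueing */

	if (q->flags & TCQ_F_NOLOCK) {
		lock = qdisc_lock(q);
		spin_lock(lock);
	}

	__skb_queue_tail(&q->skb_bad_txq, skb);
	/* only pkt_len is used below, never skb */

	if (qdisc_is_percpu_stats(q)) {
		this_cpu_add(q->cpu_qstats->backlog, pkt_len);
		qdisc_qstats_cpu_qlen_inc(q);
	} else {
		q->qstats.backlog += pkt_len;
		q->q.qlen++;
	}

	if (lock)
		spin_unlock(lock);
}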