Date: Wed, 6 Jul 2016 18:19:11 -0700
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	stable@...r.kernel.org,
	Eric Dumazet <edumazet@...gle.com>,
	WANG Cong <xiyou.wangcong@...il.com>,
	Jamal Hadi Salim <jhs@...atatu.com>,
	"David S. Miller" <davem@...emloft.net>
Subject: [PATCH 4.4 01/32] net_sched: fix pfifo_head_drop behavior vs backlog

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Eric Dumazet <edumazet@...gle.com>

[ Upstream commit 6c0d54f1897d229748d4f41ef919078db6db2123 ]

When the qdisc is full, we drop a packet at the head of the queue,
queue the current skb and return NET_XMIT_CN.

Now that we track backlog on upper qdiscs, we need to call
qdisc_tree_reduce_backlog(), even if the qlen did not change.

Fixes: 2ccccf5fb43f ("net_sched: update hierarchical backlog too")
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Cc: WANG Cong <xiyou.wangcong@...il.com>
Cc: Jamal Hadi Salim <jhs@...atatu.com>
Acked-by: Cong Wang <xiyou.wangcong@...il.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 net/sched/sch_fifo.c |    4 ++++
 1 file changed, 4 insertions(+)

--- a/net/sched/sch_fifo.c
+++ b/net/sched/sch_fifo.c
@@ -37,14 +37,18 @@ static int pfifo_enqueue(struct sk_buff
 
 static int pfifo_tail_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 {
+	unsigned int prev_backlog;
+
 	if (likely(skb_queue_len(&sch->q) < sch->limit))
 		return qdisc_enqueue_tail(skb, sch);
 
+	prev_backlog = sch->qstats.backlog;
 	/* queue full, remove one skb to fulfill the limit */
 	__qdisc_queue_drop_head(sch, &sch->q);
 	qdisc_qstats_drop(sch);
 	qdisc_enqueue_tail(skb, sch);
 
+	qdisc_tree_reduce_backlog(sch, 0, prev_backlog - sch->qstats.backlog);
 	return NET_XMIT_CN;
 }
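
For readers tracing the accounting: the hunk passes a qlen delta of 0 because
one packet is dropped and one is enqueued, so the queue length is unchanged,
while the byte delta is the size of the dropped head skb minus the size of
the new skb. A minimal user-space sketch of that arithmetic (illustrative
only, not part of the patch; the byte counts are made-up values):

	#include <stdio.h>

	int main(void)
	{
		unsigned int backlog = 3000;     /* hypothetical queued bytes */
		unsigned int dropped_len = 1500; /* head skb being removed    */
		unsigned int new_len = 100;      /* current skb being queued  */

		unsigned int prev_backlog = backlog;
		backlog -= dropped_len;          /* __qdisc_queue_drop_head() */
		backlog += new_len;              /* qdisc_enqueue_tail()      */

		/* qdisc_tree_reduce_backlog(sch, 0, prev_backlog - backlog) */
		printf("qlen delta = 0, backlog delta = %u\n",
		       prev_backlog - backlog);  /* prints 1400 */
		return 0;
	}

Without the new call, parent qdiscs would keep counting the dropped skb's
bytes in their backlog, since nothing else reports the head drop upward.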