Date:	Sun, 30 Aug 2009 13:52:09 +0530
From:	Krishna Kumar2 <krkumar2@...ibm.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	davem@...emloft.net, netdev@...r.kernel.org
Subject: Re: [RFC PATCH] sched: Fix resource limiting in pfifo_fast

I had thought of this reason before submitting, but I felt that if we are
filling up the qdisc due to some problem at the driver/device, the issue
should be handled at a different level (increase tx_queue_len, let
packets drop and let TCP handle it, etc.).

So I am not sure whether it was intentionally designed this way, or whether
it needs fixing to enforce a true maximum limit.

Thanks,

- KK

> Eric Dumazet <eric.dumazet@...il.com>
> Re: [RFC PATCH] sched: Fix resource limiting in pfifo_fast
>
> Krishna Kumar a écrit :
> > From: Krishna Kumar <krkumar2@...ibm.com>
> >
> > pfifo_fast_enqueue has this check:
> >         if (skb_queue_len(list) < qdisc_dev(qdisc)->tx_queue_len) {
> >
> > which allows each band to enqueue up to tx_queue_len skbs, for a
> > total of 3*tx_queue_len skbs. I am not sure if this was the
> > intention of the limiting in the qdisc.
>
> Yes, I noticed that and said to myself:
> this was to let high-priority packets have their own limit,
> independent of whether low-priority packets have filled their queue.
>
> >
> > Patch compiled and 32 simultaneous netperf testing ran fine. Also:
> > # tc -s qdisc show dev eth2
> > qdisc pfifo_fast 0: root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
> >  Sent 16835026752 bytes 373116 pkt (dropped 0, overlimits 0 requeues 25)
> >  rate 0bit 0pps backlog 0b 0p requeues 25
> >
> > (I am taking next week off, so sorry for any delay in responding)
> >
> > Signed-off-by: Krishna Kumar <krkumar2@...ibm.com>
> > ---
> >
> >  net/sched/sch_generic.c |    8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff -ruNp org/net/sched/sch_generic.c new/net/sched/sch_generic.c
> > --- org/net/sched/sch_generic.c   2009-08-30 11:18:23.000000000 +0530
> > +++ new/net/sched/sch_generic.c   2009-08-30 11:21:50.000000000 +0530
> > @@ -432,11 +432,11 @@ static inline struct sk_buff_head *band2
> >
> >  static int pfifo_fast_enqueue(struct sk_buff *skb, struct Qdisc* qdisc)
> >  {
> > -   int band = prio2band[skb->priority & TC_PRIO_MAX];
> > -   struct pfifo_fast_priv *priv = qdisc_priv(qdisc);
> > -   struct sk_buff_head *list = band2list(priv, band);
> > +   if (skb_queue_len(&qdisc->q) < qdisc_dev(qdisc)->tx_queue_len) {
> > +      int band = prio2band[skb->priority & TC_PRIO_MAX];
> > +      struct pfifo_fast_priv *priv = qdisc_priv(qdisc);
> > +      struct sk_buff_head *list = band2list(priv, band);
> >
> > -   if (skb_queue_len(list) < qdisc_dev(qdisc)->tx_queue_len) {
> >        priv->bitmap |= (1 << band);
> >        qdisc->q.qlen++;
> >        return __qdisc_enqueue_tail(skb, qdisc, list);
>
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

