Date:	Wed, 14 Mar 2012 04:52:01 +0000
From:	Dave Taht <dave.taht@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	jdb@...x.dk, David Miller <davem@...emloft.net>,
	netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH] sch_sfq: revert dont put new flow at the end of flows

On Wed, Mar 14, 2012 at 4:04 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> This reverts commit d47a0ac7b6 (sch_sfq: dont put new flow at the end of
> flows)
>
> As Jesper found out, the patch sounded great but has bad side effects.

Well, under most circumstances it IS great.

As the depth of the SFQ queue increases it gets increasingly hard to
trigger the problem. I've been using depth values in the 200-300 range,
and in combination with RED, I haven't seen it happen.

Also, in part, the SFQ behavior observed is due to an interaction with
HTB's 'capture' of a packet rather than a peek. The situation in which
Jesper encountered this issue (trying to regulate flows to hundreds of
downstream clients in a very deterministic test) is different from the
one where I don't encounter it (trying to regulate upstream flows from
a very few clients).

For more details:

http://www.bufferbloat.net/issues/332
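
To make the 'capture' vs peek distinction concrete, here is a toy model.
This is not HTB or SFQ code and all the names (toy_queue, toy_peek,
toy_dequeue, the packet ids) are invented for the demo; it only sketches
the idea that once a throttled parent has dequeued a packet from its
child, the child's scheduling can no longer reorder it, whereas a peek
would have left that choice to the child:

/* Toy model (not kernel code) of "capture" vs peek at a throttled parent. */
#include <stdio.h>

#define QLEN 4

struct toy_queue {
	int pkt[QLEN];		/* packet ids; 0 means "nothing" */
	int head, tail;
};

static int toy_peek(const struct toy_queue *q)
{
	return q->head == q->tail ? 0 : q->pkt[q->head];
}

static int toy_dequeue(struct toy_queue *q)
{
	return q->head == q->tail ? 0 : q->pkt[q->head++];
}

static void toy_enqueue(struct toy_queue *q, int id)
{
	q->pkt[q->tail++] = id;
}

int main(void)
{
	struct toy_queue child = { .head = 0, .tail = 0 };
	int captured;

	toy_enqueue(&child, 1);		/* an old flow's packet */

	/* The throttled parent "captures" the packet: it is pulled out of
	 * the child now and held until the parent may transmit. */
	captured = toy_dequeue(&child);

	/* A new flow arrives in the child in the meantime ... */
	toy_enqueue(&child, 2);

	/* ... but packet 1 has already left the child, so whatever ordering
	 * the child would now prefer cannot apply to it.  A parent that only
	 * peeked (toy_peek) would have left the packet in the child. */
	printf("parent holds %d, child head is %d\n",
	       captured, toy_peek(&child));
	return 0;
}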

But I concur with reverting this for now. Sadly. I wouldn't mind if
there were a way to keep the old behavior as an option...

>
> In stress situations, pushing new flows to the front of the queue can
> prevent old flows from making any progress. Packets can stay in the SFQ
> queue for an unlimited amount of time.
>
> It's possible to add heuristics to limit this problem, but this would
> add complexity outside the scope of SFQ.
>
> A more sensible answer to Dave Taht's concerns (he reported the issue I
> tried to solve in the original commit) is probably to use a qdisc
> hierarchy so that high-prio packets don't enter a potentially crowded
> SFQ qdisc.

Um, er, in today's port 80/443 world, there isn't any such thing as
high-prio packets.

I am curious as to whether this problem can be made to happen with qfq.

> Reported-by: Jesper Dangaard Brouer <jdb@...x.dk>
> Cc: Dave Taht <dave.taht@...il.com>
> Signed-off-by: Eric Dumazet <eric.dumazet@...il.com>
> ---
> I tried various tricks to avoid a revert, but in the end it seems better
> to use a single queue.

I had some hope for a semi-random alternating queue.

>
>  net/sched/sch_sfq.c |    6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
> index 60d4718..02a21ab 100644
> --- a/net/sched/sch_sfq.c
> +++ b/net/sched/sch_sfq.c
> @@ -469,11 +469,15 @@ enqueue:
>        if (slot->qlen == 1) {          /* The flow is new */
>                if (q->tail == NULL) {  /* It is the first flow */
>                        slot->next = x;
> -                       q->tail = slot;
>                } else {
>                        slot->next = q->tail->next;
>                        q->tail->next = x;
>                }
> +               /* We put this flow at the end of our flow list.
> +                * This might sound unfair for a new flow to wait after old ones,
> +                * but we could end up servicing new flows only, and freeze old ones.
> +                */
> +               q->tail = slot;
>                /* We could use a bigger initial quantum for new flows */
>                slot->allot = q->scaled_quantum;
>        }
>
>
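
For anyone trying to follow the list manipulation above, here is a
standalone sketch of the circular flow list the patch touches. It is
not the kernel code: the flow ids, the fixed arrival pattern, and the
simplification that a served flow immediately leaves the list are all
made up for the demo. q->tail points at the last active flow and
dequeue serves tail->next (the head); inserting the newcomer at the
tail (what this revert restores) keeps round-robin order, while leaving
it at the head (what the reverted commit did) lets a steady stream of
new flows starve the old one:

/* Simplified model of SFQ's circular flow list (not the kernel code). */
#include <stdio.h>
#include <stdbool.h>

struct flow {
	int id;
	struct flow *next;	/* circular singly linked list */
};

static struct flow *tail;	/* like q->tail; NULL when list is empty */

static void add_flow(struct flow *f, bool at_tail)
{
	if (!tail) {
		f->next = f;	/* first flow points at itself */
		tail = f;
		return;
	}
	f->next = tail->next;	/* link in right after the tail ... */
	tail->next = f;
	if (at_tail)
		tail = f;	/* ... and become the new tail (this revert) */
	/* else: f stays at the head and is served next (reverted commit) */
}

static struct flow *serve_one(void)
{
	struct flow *f = tail->next;	/* head of the list */

	if (f == tail)			/* last remaining flow */
		tail = NULL;
	else
		tail->next = f->next;	/* drop the head, as if it emptied */
	return f;
}

int main(void)
{
	struct flow flows[4] = { {1}, {2}, {3}, {4} };
	bool at_tail = true;	/* flip to false to see head insertion */
	int i;

	add_flow(&flows[0], at_tail);	/* the "old" flow */
	for (i = 1; i < 4; i++) {
		add_flow(&flows[i], at_tail);	/* a new flow shows up ... */
		printf("served flow %d\n", serve_one()->id);
	}
	/* With at_tail=true flow 1 is served first (1, 2, 3); with
	 * at_tail=false every newcomer jumps ahead (2, 3, 4) and flow 1
	 * is still waiting at the end: the starvation Eric describes. */
	return 0;
}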



-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://www.bufferbloat.net
