Date:	Tue, 14 Oct 2008 08:22:35 -0700
From:	Stephen Hemminger <shemminger@...tta.com>
To:	Jarek Poplawski <jarkao2@...il.com>
Cc:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [PATCH 04/14] sch_netem: Use requeue list instead of
 ops->requeue()

On Tue, 14 Oct 2008 09:53:49 +0000
Jarek Poplawski <jarkao2@...il.com> wrote:

> -------- Original Message --------
> Subject: [PATCH 5/9]: sch_netem: Use requeue list instead of ops->requeue()
> Date: Mon, 18 Aug 2008 01:37:02 -0700 (PDT)
> From: David Miller <davem@...emloft.net>
> 
> --------------->
> From: David Miller <davem@...emloft.net>
> sch_netem: Use requeue list instead of ops->requeue()
> 
> This code just wants to make this packet the "front" one, and that's
> just as simply done by queueing to the ->requeue list.
> 
> Signed-off-by: Jarek Poplawski <jarkao2@...il.com>
> ---
>  net/sched/sch_netem.c |   11 +++--------
>  1 files changed, 3 insertions(+), 8 deletions(-)
> 
> diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
> index cc4d057..5ca92d9 100644
> --- a/net/sched/sch_netem.c
> +++ b/net/sched/sch_netem.c
> @@ -233,7 +233,8 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch)
>  		 */
>  		cb->time_to_send = psched_get_time();
>  		q->counter = 0;
> -		ret = q->qdisc->ops->requeue(skb, q->qdisc);
> +		__skb_queue_tail(&q->qdisc->requeue, skb);
> +		ret = NET_XMIT_SUCCESS;
>  	}
>  
>  	if (likely(ret == NET_XMIT_SUCCESS)) {
> @@ -295,13 +296,7 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
>  			return skb;
>  		}
>  
> -		if (unlikely(q->qdisc->ops->requeue(skb, q->qdisc) != NET_XMIT_SUCCESS)) {
> -			qdisc_tree_decrease_qlen(q->qdisc, 1);
> -			sch->qstats.drops++;
> -			printk(KERN_ERR "netem: %s could not requeue\n",
> -			       q->qdisc->ops->id);
> -		}
> -
> +		__skb_queue_tail(&q->qdisc->requeue, skb);
>  		qdisc_watchdog_schedule(&q->watchdog, cb->time_to_send);
>  	}
>  

This won't work for the case where time-based reordering changes which
packet is sent next.  The current code works like this:

    Packet marked to be sent at some time (+101ms)
    new packet is queued and the random delay computes a smaller delta (+87ms)
    new packet will go out first.

This was done for compatibility with NISTnet, so researchers who wanted to
reproduce NISTnet results could use netem.
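
A minimal user-space sketch of that scenario (purely illustrative; the packet
names, delays and the standalone program are assumptions, not kernel code):
with time-ordered insertion the +87ms packet overtakes the one already waiting
at +101ms, while a plain requeue list is always served first, so the overtake
can no longer happen.

/* Illustrative sketch of the reordering case, not kernel code.
 * "A" is the packet already waiting with a +101ms delay,
 * "B" the later packet that drew a smaller +87ms delay.
 */
#include <stdio.h>

struct pkt {
	const char *name;
	int time_to_send;	/* delay in ms, relative to now */
};

int main(void)
{
	struct pkt a = { "A", 101 };	/* queued first  */
	struct pkt b = { "B",  87 };	/* queued second */

	/* Old behaviour: the not-yet-due packet is kept in time order
	 * (tfifo-style), so the packet that is due sooner goes out first. */
	struct pkt *ordered[2] = { &b, &a };
	if (a.time_to_send < b.time_to_send) {
		ordered[0] = &a;
		ordered[1] = &b;
	}
	printf("time-ordered insert: %s then %s\n",
	       ordered[0]->name, ordered[1]->name);

	/* Patched behaviour: the not-yet-due packet sits on the requeue
	 * list, which is always served first, so it cannot be overtaken. */
	struct pkt *requeued[2] = { &a, &b };
	printf("requeue list       : %s then %s\n",
	       requeued[0]->name, requeued[1]->name);

	return 0;
}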

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
