Date:	Fri, 2 Mar 2007 19:32:41 -0500
From:	"Ritesh Kumar" <ritesh@...unc.edu>
To:	"Stephen Hemminger" <shemminger@...ux-foundation.org>
Cc:	"Patrick McHardy" <kaber@...sh.net>, netdev@...r.kernel.org
Subject: Re: Netem tfifo implementation

On 3/2/07, Stephen Hemminger <shemminger@...ux-foundation.org> wrote:
> On Fri, 2 Mar 2007 15:56:54 -0500
> "Ritesh Kumar" <ritesh@...unc.edu> wrote:
>
> > On 3/2/07, Patrick McHardy <kaber@...sh.net> wrote:
> > > Ritesh Kumar wrote:
> > > > Hi,
> > > >    I recently saw the qdisc "tfifo" in the netem module
> > > > (net/sched/sch_netem.c) when I migrated some of my patches from 2.6.14
> > > > to 2.6.20. As I understand, tfifo helps in keeping the queue of
> > > > packets sorted according to their "time_to_send". [tfifo was not
> > > > present in 2.6.14 perhaps because arrival order of packets was always
> > > > equal to the departure order]. However, tfifo uses a linear search in
> > > > the packet queue to find where to enqueue the packet.
> > > >    Quite some time ago (2.6.14 era), I needed a similar functionality
> > > > from the netem module and I ended up coding a pointer based min-heap
> > > > for the same. I was wondering if the community was interested in using
> > > > the min-heap implementation to replace the linear search
> > > > implementation. I have tested the min-heap quite a few times and it
> > > > seems to work.
> > > >    The implementation is slightly non-trivial because it uses
> > > > pointers to maintain the heap structure instead of using good old
> > > > fixed-size arrays. I did this mainly so that the limit of the netem
> > > > qdisc could be changed on the fly. However, because every sk_buff now
> > > > needs two pointers for its children nodes, I added an extra
> > > > (sk_buff*)next2 to struct sk_buff (sorry!). However, this can probably
> > > > be changed to a pointer inside netem_skb_cb.  Also, because I needed
> > > > this for personal work and 2.6.14 didn't contain tfifo, I basically
> > > > removed the embedded qdisc and made netem a classless qdisc with my
> > > > min heap as the native "queue" (sorry again! :) )
> > >
> > > The tfifo qdisc has a limit, why not just allocate a fixed-size heap
> > > based on that?
> > >
> > >
> >
> > The tfifo queue limit itself can be changed, and that creates the
> > problem. If we use a fixed heap (say, implemented using a fixed-size
> > array), then we have to copy all the pointers from the first
> > array to a reallocated array whenever the queue limit is changed.
> > In retrospect, moving just a few tens of kilobytes of data doesn't seem
> > like that much of a problem... now I feel stupid for having put in so
> > much effort :).
> >
>
> Tfifo is a special case because:
>   * timestamps are stored in skb->cb, so it is only really usable inside
>     netem, which adds the timestamps.
>   * insertions are cheap because the walk goes backwards and netem usually
>     has tnext > tlast. Only with huge jitter, which causes massive
>     reordering (and that is unrealistic), would you see a problem.
>

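To make sure we are talking about the same operation: as I understand it,
the tfifo insert is essentially a backward walk over a time-sorted list,
roughly like the simplified userspace sketch below (keyed on time_to_send;
the names and types are illustrative, not the actual sch_netem.c code):

#include <stddef.h>

struct pkt {
        long long time_to_send;         /* when netem should release it */
        struct pkt *prev, *next;
};

struct tqueue {
        struct pkt *head, *tail;        /* sorted by time_to_send */
};

/* Walk backwards from the tail; in the common case the new packet is
 * due after everything already queued, so we stop immediately. */
static void tfifo_insert(struct tqueue *q, struct pkt *n)
{
        struct pkt *p = q->tail;

        while (p && p->time_to_send > n->time_to_send)
                p = p->prev;

        if (!p) {                       /* earliest packet so far: new head */
                n->prev = NULL;
                n->next = q->head;
                if (q->head)
                        q->head->prev = n;
                else
                        q->tail = n;
                q->head = n;
        } else {                        /* insert right after p */
                n->prev = p;
                n->next = p->next;
                if (p->next)
                        p->next->prev = n;
                else
                        q->tail = n;
                p->next = n;
        }
}
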
You are right: huge jitter inside a given flow is unrealistic in real
networks, and it also causes artificial reordering. In our lab, however,
we use netem (with my changes) to apply per-flow delays. Those delays
vary a lot from flow to flow, so packets are often enqueued well out of
time_to_send order and the backward walk has to scan deep into the
queue; that is why we went through these optimizations.
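
The pointer-based heap replaces that walk with an O(log n) sift-up and can
grow one node at a time, so nothing has to be reallocated when the qdisc
limit changes. A rough userspace sketch of the idea is below; it uses
separate heap nodes and swaps payload pointers during the sift-up, rather
than linking the skbs directly through next/next2 the way my patch does,
so treat it as an illustration of the shape, not the patch itself:

#include <stdlib.h>

/* struct pkt as in the sketch above */

struct hnode {
        struct pkt *p;                  /* the queued packet */
        struct hnode *parent, *left, *right;
};

struct pheap {
        struct hnode *root;
        unsigned int n;                 /* nodes currently in the heap */
};

/* The bits of the new 1-based position (below its top bit) give the
 * left/right path from the root, so we can find the attach point
 * without an array, then sift the payload up on time_to_send. */
static int pheap_push(struct pheap *h, struct pkt *p)
{
        struct hnode *node = calloc(1, sizeof(*node));
        struct hnode *cur;
        unsigned int pos, bit;

        if (!node)
                return -1;
        node->p = p;

        pos = ++h->n;
        if (pos == 1) {
                h->root = node;
                return 0;
        }

        for (bit = 1; (pos >> (bit + 1)) != 0; bit++)
                ;                       /* bit = index of top set bit */

        cur = h->root;
        while (bit > 1) {               /* descend to the parent slot */
                bit--;
                cur = ((pos >> bit) & 1) ? cur->right : cur->left;
        }
        node->parent = cur;
        if (pos & 1)
                cur->right = node;
        else
                cur->left = node;

        while (node->parent &&
               node->parent->p->time_to_send > node->p->time_to_send) {
                struct pkt *tmp = node->parent->p;      /* sift up */

                node->parent->p = node->p;
                node->p = tmp;
                node = node->parent;
        }
        return 0;
}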

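If I were redoing it along the lines Patrick suggested, the heap would
just live in a flat array of packet pointers keyed on time_to_send, and
changing the qdisc limit would mean reallocating the array and copying
the pointers over. A rough sketch of that (again illustrative, not a
patch; it assumes the queue has already been drained below a new,
smaller limit):

#include <stdlib.h>

/* struct pkt as in the sketch above */

struct pkt_heap {
        struct pkt **slot;      /* slot[1..n] in use, slot[0] unused */
        unsigned int n;         /* packets currently queued */
        unsigned int limit;     /* current qdisc limit */
};

/* Grow or shrink with the qdisc limit: realloc copies the pointers. */
static int heap_set_limit(struct pkt_heap *h, unsigned int limit)
{
        struct pkt **s = realloc(h->slot, (limit + 1) * sizeof(*s));

        if (!s)
                return -1;
        h->slot = s;
        h->limit = limit;
        return 0;
}

/* O(log n) insert: append at the end and sift up on time_to_send. */
static int heap_push(struct pkt_heap *h, struct pkt *p)
{
        unsigned int i;

        if (h->n >= h->limit)
                return -1;              /* the qdisc would drop the packet */

        i = ++h->n;
        while (i > 1 && h->slot[i / 2]->time_to_send > p->time_to_send) {
                h->slot[i] = h->slot[i / 2];
                i /= 2;
        }
        h->slot[i] = p;
        return 0;
}
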
Thanks for all the feedback.

Ritesh
