Message-ID: <9ab07532-3d46-4e4a-8baf-5863b0cec5db@jasiiieee>
Date: Sun, 11 Dec 2011 19:42:48 -0500 (EST)
From: "John A. Sullivan III" <jsullivan@...nsourcedevel.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: netdev@...r.kernel.org
Subject: Re: IFB and bridges
----- Original Message -----
> From: "Eric Dumazet" <eric.dumazet@...il.com>
> To: "John A. Sullivan III" <jsullivan@...nsourcedevel.com>
> Cc: netdev@...r.kernel.org
> Sent: Sunday, December 11, 2011 5:00:59 PM
> Subject: Re: IFB and bridges
>
> On Sunday, 11 December 2011 at 17:38 -0500, John A. Sullivan III wrote:
> > I know IFB is often used for ingress, but I wasn't really thinking
> > of ingress filtering. Let's say I have a 12-port Linux switch. If
> > any of the ports become backlogged, I want them to prioritize
> > time-sensitive traffic, so I implement traffic shaping, but I don't
> > want to have to define my qdiscs, classes, and filters 12 times
> > over if they are all the same. So I would direct each port to an
> > IFB (not sure if that's intolerable overhead), have a single set of
> > qdiscs, classes, and filters, and, once those are applied, the
> > packet arrives back on the same interface and proceeds, assuming it
> > has not been dropped or delayed (a rough sketch of such a redirect
> > appears below the reply). - John
>
> Really? How are you going to shape a single IFB device if you really
> have 12 independent ports? (It's a switch, not a hub, after all.)
>
> A script can define your qdiscs/classes/filters a hundred times, or a
> thousand times, and writing such a script is far easier than setting
> up IFB (a sketch of such a loop appears below).
>
<grin> That's why I thought I'd ask the experts :) - John
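
For concreteness, the redirect John describes might look roughly like
the following for a single port. This is only a sketch: the interface
names (eth0, ifb0), the HTB rates, and the u32 tos match used to pick
out time-sensitive traffic are illustrative assumptions, not details
taken from the thread.

    # Create the IFB device that will hold the shared shaping config
    modprobe ifb numifbs=1
    ip link set dev ifb0 up

    # One set of qdiscs/classes/filters, defined once, on ifb0
    tc qdisc add dev ifb0 root handle 1: htb default 20
    tc class add dev ifb0 parent 1:  classid 1:1  htb rate 100mbit
    tc class add dev ifb0 parent 1:1 classid 1:10 htb rate 30mbit ceil 100mbit prio 0
    tc class add dev ifb0 parent 1:1 classid 1:20 htb rate 70mbit ceil 100mbit prio 1
    tc filter add dev ifb0 parent 1: protocol ip u32 \
        match ip tos 0xb8 0xff flowid 1:10   # e.g. EF-marked traffic

    # On each switch port (repeat for eth1 .. eth11), push egress
    # traffic through ifb0 before it leaves the port
    tc qdisc add dev eth0 root handle 2: prio
    tc filter add dev eth0 parent 2: protocol all u32 match u32 0 0 \
        action mirred egress redirect dev ifb0

With a single ifb0 shared by every port, the classes and rates above
apply to the aggregate of the whole switch rather than to each port,
which is the objection Eric raises for 12 independent ports.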
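
Eric's scripted alternative is easier to sketch: a small loop that
installs an identical, but independent, qdisc/class/filter set on
every port. Again, the port names, rates, and the tos match are
assumptions for illustration only.

    # Same shaping rules, repeated per port, each port shaping
    # independently against its own link rate
    for dev in eth0 eth1 eth2 eth3 eth4 eth5 eth6 \
               eth7 eth8 eth9 eth10 eth11; do
        tc qdisc  add dev "$dev" root handle 1: htb default 20
        tc class  add dev "$dev" parent 1:  classid 1:1  htb rate 100mbit
        tc class  add dev "$dev" parent 1:1 classid 1:10 htb rate 30mbit ceil 100mbit prio 0
        tc class  add dev "$dev" parent 1:1 classid 1:20 htb rate 70mbit ceil 100mbit prio 1
        tc filter add dev "$dev" parent 1: protocol ip u32 \
            match ip tos 0xb8 0xff flowid 1:10
    done

The rules are written twelve times, but the loop makes that cheap, and
each port keeps its own queues and its own bandwidth budget.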