Message-ID: <20080825075744.GC2633@ff.dom.local>
Date: Mon, 25 Aug 2008 07:57:44 +0000
From: Jarek Poplawski <jarkao2@...il.com>
To: David Miller <davem@...emloft.net>
Cc: hadi@...erus.ca, alexander.duyck@...il.com,
jeffrey.t.kirsher@...el.com, jeff@...zik.org,
netdev@...r.kernel.org, alexander.h.duyck@...el.com
Subject: Re: [PATCH 3/3] pkt_sched: restore multiqueue prio scheduler
On Mon, Aug 25, 2008 at 12:48:25AM -0700, David Miller wrote:
> From: Jarek Poplawski <jarkao2@...il.com>
> Date: Mon, 25 Aug 2008 06:06:40 +0000
>
> > It seems the priority can really be misleading here. Do you mean these
> > hwqueues are internally prioritized too? This would be strange to me,
> > because why would we need this independent locking per hwqueue if
> > everything has to wait for the most prioritized hwqueue anyway? And,
> > if so, the current dev_pick_tx() with simple_tx_hash() would always
> > harm some flows by directing them to lower-priority hwqueues?!
>
> Yes some can do internal prioritization in hardware.
>
> But even if not, this means that even if the card does flow-based
> multiqueue, this is still the right thing to do.
>
> Think about what actually happens on the wire as a result of
> our actions, rather than intuition :-)
>
> > But, even if it's true, let's take a look at fifo: a packet at the
> > head of the qdisc's queue could be hashed to the last hwqueue. If
> > it's stopped for some reason, this packet would be constantly
> > requeued, blocking all other packets while their hwqueues are ready
> > and empty!
>
> If we feed packets after the first one to the card, we would not
> be implementing a FIFO.
Not necessarily so: if separate flows are hashed to "their" hwqueues,
a per-flow FIFO would still be obeyed.
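To illustrate the point (a minimal userspace sketch, not kernel code:
pick_tx_queue(), struct flow and the toy hash below are all made up,
standing in for whatever dev_pick_tx()/simple_tx_hash() actually do):
because the hash is stable, every packet of a flow lands on the same
hwqueue, so a stopped queue can only stall its own flows while the
others keep their per-flow ordering.

/*
 * Userspace sketch: hash each flow's addresses/ports to a fixed tx
 * queue. The names and the hash are illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

#define NUM_TX_QUEUES 4

struct flow {
	uint32_t saddr, daddr;
	uint16_t sport, dport;
};

/* Toy stable hash; any deterministic flow hash gives the same property. */
static unsigned int pick_tx_queue(const struct flow *f)
{
	uint32_t h = f->saddr ^ f->daddr ^
		     ((uint32_t)f->sport << 16 | f->dport);

	h ^= h >> 16;
	h *= 0x45d9f3b;
	h ^= h >> 16;
	return h % NUM_TX_QUEUES;
}

int main(void)
{
	struct flow flows[] = {
		{ 0x0a000001, 0x0a000002, 1024, 80 },
		{ 0x0a000001, 0x0a000003, 1025, 80 },
		{ 0x0a000004, 0x0a000002, 1026, 22 },
	};

	/*
	 * Send several "packets" per flow: each flow always maps to the
	 * same queue, so ordering within a flow is preserved even if
	 * some other queue is stopped.
	 */
	for (int pkt = 0; pkt < 3; pkt++)
		for (unsigned int i = 0;
		     i < sizeof(flows) / sizeof(flows[0]); i++)
			printf("flow %u pkt %d -> txq %u\n",
			       i, pkt, pick_tx_queue(&flows[i]));
	return 0;
}

Running it prints every packet of a given flow on the same txq, which
is all that a per-flow FIFO requires.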
Jarek P.