Message-ID: <20080825082501.GD2633@ff.dom.local>
Date: Mon, 25 Aug 2008 08:25:02 +0000
From: Jarek Poplawski <jarkao2@...il.com>
To: David Miller <davem@...emloft.net>
Cc: hadi@...erus.ca, alexander.duyck@...il.com,
jeffrey.t.kirsher@...el.com, jeff@...zik.org,
netdev@...r.kernel.org, alexander.h.duyck@...el.com
Subject: Re: [PATCH 3/3] pkt_sched: restore multiqueue prio scheduler
On Mon, Aug 25, 2008 at 01:02:06AM -0700, David Miller wrote:
> From: Jarek Poplawski <jarkao2@...il.com>
> Date: Mon, 25 Aug 2008 07:57:44 +0000
>
> > On Mon, Aug 25, 2008 at 12:48:25AM -0700, David Miller wrote:
> > > From: Jarek Poplawski <jarkao2@...il.com>
> > > Date: Mon, 25 Aug 2008 06:06:40 +0000
> > >
> > > If we feed packets after the first one to the card, we would not
> > > be implementing a FIFO.
> >
> > Not necessarily so: if separate flows are hashed to "their" hw queues,
> > a FIFO per flow would be still obeyed.
>
> What appears on the wire is still going to be similar.
>
> You have to subsequently ask if it's worth the complexity to do
> what you seem to be proposing.
>
> When a single hardware queue fills up, it's the SAME, semantically,
> as when a unary TX queue of a traditional device fills up.
>
> There is NO on the wire difference. There will be NO performance
> difference, because the device will have work to do as by definition
> of one TX queue being full there are some packets queued up to
> the device.
If with a unary TX queue we had to fill one bigger queue (or all TX
queues) before the device stopped the qdisc, while with mq TX it's
enough to have one TX queue filled to effectively stop qdisc
transmits, then IMHO there should be a performance difference.
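The per-flow FIFO point earlier in the thread can be sketched in
userspace. This is an illustrative model only, not kernel code:
select_queue and NUM_TX_QUEUES are made-up names, and the hash is just
CRC32 over the flow 4-tuple. Since the mapping is deterministic,
packets of one flow always land on the same TX queue, so each flow
still sees FIFO order even though separate flows may interleave
differently on the wire.

```python
import zlib

NUM_TX_QUEUES = 4  # hypothetical number of hardware TX queues

def select_queue(src, dst, sport, dport):
    """Map a flow's 4-tuple to a TX queue index (illustrative only).

    Deterministic: the same flow always returns the same queue, which
    is what preserves per-flow FIFO ordering across multiple queues.
    """
    key = f"{src}:{sport}->{dst}:{dport}".encode()
    return zlib.crc32(key) % NUM_TX_QUEUES

# Two packets of the same flow select the same queue:
q1 = select_queue("10.0.0.1", "10.0.0.2", 1234, 80)
q2 = select_queue("10.0.0.1", "10.0.0.2", 1234, 80)
assert q1 == q2
```

Different flows may of course hash to different queues, which is also
where the stall asymmetry Jarek describes comes from: with mq, one full
queue can stop the qdisc while the others still have room.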
Jarek P.