Date: Wed, 06 Jun 2007 15:40:41 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: peter.p.waskiewicz.jr@...el.com
Cc: hadi@...erus.ca, kaber@...sh.net, netdev@...r.kernel.org,
jeff@...zik.org, auke-jan.h.kok@...el.com
Subject: Re: [PATCH] NET: Multiqueue network device support.
From: "Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@...el.com>
Date: Wed, 6 Jun 2007 15:30:39 -0700
> > [Which of course leads to the complexity (and not optimizing
> > for the common - which is single ring NICs)].
>
> The common case for 100 Mbit and older 1 Gbit hardware is single-ring NICs. Newer
> PCI-X and PCIe NICs from 1Gbit to 10Gbit support multiple rings in the
> hardware, and it's all headed in that direction, so it's becoming the
> common case.
I totally agree. No modern commodity 1Gb or faster card is going
to be without many queues on both TX and RX.
> > I am actually not against the subqueue control - i know Peter
> > needs it for certain hardware; i am just against the mucking
> > around of the common case (single ring NIC) just to get that working.
>
> Single-ring NICs see no difference here. Please explain why using my
> patches with pfifo_fast, sch_prio, or any other existing qdisc will
> change the behavior for single-ring NICs?
I agree with the implication here, there is no penalty for existing
devices.
There are two core issues in my mind:
1) multi-queue on both RX and TX is going to be very pervasive very
   soon; everyone is putting this into silicon.
The parallelization gain potential is enormous, and we have to
design for this.
2) Queues are meant to be kept as full as possible; you can't do
   that by having only one qdisc attached to the device indicating
   a unary full status, you simply can't.