Message-ID: <1280227867.1970.208.camel@pasglop>
Date: Tue, 27 Jul 2010 20:51:07 +1000
From: Benjamin Herrenschmidt <benh@...nel.crashing.org>
To: netdev <netdev@...r.kernel.org>
Subject: Tx queue selection
Hi folks !
I'm putting my newbie hat on ... :-)
While looking at our ehea driver (and another upcoming driver I'm
helping with), I noticed that it uses the "old style" multiqueue, i.e.
it doesn't use the alloc_netdev_mq() variant, creates a single queue on
the Linux side, and makes its own selection of HW queue in start_xmit.
This has obvious drawbacks, such as not getting per-queue locks etc.
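
Just to make sure I'm reading the code right, here is roughly the shape
I have in mind for the allocation side of the conversion. This is only
a sketch; my_port, my_alloc_netdev and MY_NUM_TX_QUEUES are made-up
names, not the real ehea symbols:

#include <linux/netdevice.h>
#include <linux/etherdevice.h>

#define MY_NUM_TX_QUEUES	4	/* stand-in for the real HW queue count */

struct my_port {			/* stand-in for the driver private struct */
	/* ... */
};

static struct net_device *my_alloc_netdev(void)
{
	struct net_device *dev;

	/* Expose one Linux tx queue per HW queue instead of a single
	 * queue; the core then provides per-queue locks, per-queue
	 * stop/wake, etc. */
	dev = alloc_etherdev_mq(sizeof(struct my_port), MY_NUM_TX_QUEUES);
	if (!dev)
		return NULL;

	/* ... usual setup; start_xmit then uses
	 * skb_get_queue_mapping(skb) as the HW queue instead of a
	 * private hash ... */
	return dev;
}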
Now, the mechanics of converting that to the new scheme are easy enough
to figure out by reading the code. However, where my lack of networking
background fails me is when it comes to the policy of choosing a Tx
queue.
ehea uses its own hash of the header, different from the "default"
queue selection in the net core. Looking at other drivers such as
ixgbe, I see that it can choose to use smp_processor_id() when a flag
is set (whose meaning I don't totally understand), or otherwise default
to the core algorithm.
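
To make the question concrete, my understanding is that a driver
override takes roughly this shape (a sketch only, not ehea's or ixgbe's
actual code; use_cpu_policy is an invented knob standing in for
whatever flag ixgbe tests):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/smp.h>

static bool use_cpu_policy;	/* invented, for illustration only */

static u16 my_select_queue(struct net_device *dev, struct sk_buff *skb)
{
	/* Policy A: stay on the submitting CPU. Cheap and cache
	 * friendly, assuming at least one tx queue per CPU. */
	if (use_cpu_policy)
		return smp_processor_id() % dev->real_num_tx_queues;

	/* Policy B: the core's flow hash, which keeps all packets of a
	 * given flow on the same queue and so preserves ordering. */
	return skb_tx_hash(dev, skb);
}

(and if the driver provides no ndo_select_queue at all, the core does
the skb_tx_hash() part itself, as far as I can tell)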
Now, while I can understand why it's a good idea to use the current
processor, in order to limit cache ping-pong etc., I'm not really
confident I understand the pros and cons of using the hashing for tx. I
understand that the net core can play interesting games with
associating sockets with queues etc., but I'm a bit at a loss when it
comes to deciding what's best for this driver. I suppose I could start
by implementing my own queue selection based on what ehea does today,
but I have the nasty feeling that would be sub-optimal :-)
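
For what it's worth, the "interesting games" I'm thinking of is, if I
read net/core/dev.c correctly, roughly the following when the driver
has no ndo_select_queue (a grossly simplified paraphrase from memory,
not the actual kernel code):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/sock.h>

static u16 core_pick_tx_queue_sketch(struct net_device *dev,
				     struct sk_buff *skb)
{
	struct sock *sk = skb->sk;
	int queue_index = sk_tx_queue_get(sk);	/* -1 if nothing cached */

	if (queue_index < 0) {
		queue_index = 0;
		if (dev->real_num_tx_queues > 1)
			queue_index = skb_tx_hash(dev, skb);

		/* Remember the choice on the socket so later packets of
		 * the same flow hit the same queue without re-hashing.
		 * (The real code has a few more conditions here.) */
		if (sk)
			sk_tx_queue_set(sk, queue_index);
	}
	return queue_index;
}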
So I would very much appreciate it (and reward it with free beer at the
next conference) if somebody could give me a bit of a heads-up on how
things are expected to be done there: pros and cons, perf impact, etc.
Thanks in advance !
Cheers,
Ben.
--