Date:	Thu, 15 Apr 2010 20:54:32 -0700
From:	"George B." <georgeb@...il.com>
To:	Jay Vosburgh <fubar@...ibm.com>
Cc:	netdev@...r.kernel.org
Subject: Re: Network multiqueue question

On Thu, Apr 15, 2010 at 11:09 AM, Jay Vosburgh <fubar@...ibm.com> wrote:


>        The question I have about it (and the above patch), is: what
> does multi-queue "awareness" really mean for a bonding device?  How does
> allocating a bunch of TX queues help, given that the determination of
> the transmitting device hasn't necessarily been made?

Good point.

>        I haven't had the chance to acquire some multi-queue network
> cards and check things out with bonding, so I'm not really sure how it
> should work.  Should the bond look, from a multi-queue perspective, like
> the largest slave, or should it look like the sum of the slaves?  Some
> of this may be mode-specific, as well.

I would say that setting the number of bands to either the number of
cores or 4, whichever is smaller, would be a good start.  That is
probably fine for GigE.  The multiqueue-capable network cards we have
expose either 4 or 8 bands.  In an ideal world the bond would have as
many bands as the physical ethernet devices underneath it, but
changing that on the fly whenever the set of available interfaces
changes might be more trouble than it is worth.
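
To make that concrete, here is a rough, untested sketch of what I
mean, assuming the bond device were allocated with alloc_netdev_mq();
the bond_sketch_* names and the private struct are made up for
illustration, not actual bonding code:

/*
 * Hypothetical sketch only: pick a default TX queue ("band") count of
 * min(online CPUs, 4) and allocate the bond netdev with that many
 * queues via alloc_netdev_mq().
 */
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/cpumask.h>
#include <linux/kernel.h>

/* stand-ins for the real bonding private struct and setup routine */
struct bond_sketch_priv { int dummy; };

static void bond_sketch_setup(struct net_device *dev)
{
	ether_setup(dev);
}

static unsigned int bond_default_tx_queues(void)
{
	/* one band per online CPU, capped at 4 */
	return min_t(unsigned int, num_online_cpus(), 4);
}

static struct net_device *bond_sketch_alloc(void)
{
	return alloc_netdev_mq(sizeof(struct bond_sketch_priv), "bond%d",
			       bond_sketch_setup, bond_default_tx_queues());
}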

Four or eight would seem to be a good number to start with, as I
don't think I have seen a multiqueue ethernet card with fewer than 4
queues.  If you have fewer than 4 CPUs there probably isn't much
utility in having more bands than processors, and whatever utility
there is diminishes rapidly as the number of bands grows beyond the
number of CPUs.  At that point you have probably just spent a lot of
work building a bigger buffer.

I would be happy with 4 bands.  I guess it just depends on where you
want the bottleneck.  If you have 8 bands on the bond driver (another
reasonable alternative) and only 4 bands available for output, you
have just moved the contention down a layer, to between the bond and
the ethernet driver.  But I am a fan of moving the point of
contention as far away from the application interface as possible.
If I have one big lock around the bond driver and six things waiting
to talk to the network, those are six things that can't be doing
anything else.  I would rather have the application handle its
network task and get back to other things.  Now if you have 8 bands
of bond and only 4 bands of ethernet, or even one band of ethernet,
oh well.  Maybe make the number of bands configurable from 1 to 8 via
a driver option that can be set explicitly and defaults to, say, 4?
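
For what it's worth, a rough sketch of what such an option might look
like (the tx_queues name, the 1-8 clamping and the bond_sketch_*
functions are hypothetical, not anything in the current driver):

/*
 * Hypothetical sketch of a "number of bands" module option: an
 * integer parameter that defaults to 4 and is clamped to 1..8 at
 * init time.  Not actual bonding driver code.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int tx_queues = 4;
module_param(tx_queues, int, 0);
MODULE_PARM_DESC(tx_queues,
		 "Number of TX queues (bands) for the bond device, 1-8, default 4");

static int __init bond_sketch_init(void)
{
	/* silently clamp out-of-range values rather than failing to load */
	tx_queues = clamp(tx_queues, 1, 8);
	pr_info("bond sketch: using %d TX queues\n", tx_queues);
	return 0;
}

static void __exit bond_sketch_exit(void)
{
}

module_init(bond_sketch_init);
module_exit(bond_sketch_exit);
MODULE_LICENSE("GPL");

Loading it would look something like "modprobe <module> tx_queues=8",
and the chosen value could then be fed into alloc_netdev_mq() as in
the earlier sketch.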

Thanks for taking the time to answer.

George
