Message-Id: <20071007.215124.85709188.davem@davemloft.net>
Date:	Sun, 07 Oct 2007 21:51:24 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	hadi@...erus.ca
Cc:	peter.p.waskiewicz.jr@...el.com, krkumar2@...ibm.com,
	johnpol@....mipt.ru, herbert@...dor.apana.org.au, kaber@...sh.net,
	shemminger@...ux-foundation.org, jagana@...ibm.com,
	Robert.Olsson@...a.slu.se, rick.jones2@...com, xma@...ibm.com,
	gaagaan@...il.com, netdev@...r.kernel.org, rdreier@...co.com,
	mcarlson@...adcom.com, jeff@...zik.org, mchan@...adcom.com,
	general@...ts.openfabrics.org, kumarkr@...ux.ibm.com,
	tgraf@...g.ch, randy.dunlap@...cle.com, sri@...ibm.com
Subject: Re: [PATCH 1/4] [NET_SCHED] explict hold dev tx lock

From: jamal <hadi@...erus.ca>
Date: Mon, 24 Sep 2007 19:38:19 -0400

> How is the policy to define the qdisc queues locked/mapped to tx rings? 

For these high-performance 10Gbit cards it's really a load-balancing
function, as all of the transmit queues go out the same physical
port, so you could:

1) Load balance on CPU number.
2) Load balance on "flow"
3) Load balance on destination MAC

etc. etc. etc.
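
To make that concrete, a flow-based selector could look something
like this untested sketch (the queue count is passed in by the
driver; select_txq() itself is made up, the helpers are the existing
kernel ones):

#include <linux/ip.h>
#include <linux/jhash.h>
#include <linux/skbuff.h>
#include <linux/smp.h>

/* Hash the flow to a queue index; fall back to balancing on CPU
 * number when we don't recognize the protocol.  Runs in the TX
 * path, so preemption is already disabled for smp_processor_id(). */
static u16 select_txq(struct sk_buff *skb, unsigned int num_queues)
{
        u32 hash;

        if (skb->protocol == htons(ETH_P_IP)) {
                const struct iphdr *iph = ip_hdr(skb);

                hash = jhash_3words(iph->saddr, iph->daddr,
                                    iph->protocol, 0);
        } else {
                hash = smp_processor_id();
        }

        return (u16)(hash % num_queues);
}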

It's something that really sits logically between the qdisc and the
card, not something that is a qdisc thing.

In some ways it's similar to bonding, but using anything similar to
bonding's infrastructure (stacking devices) is way overkill for this.

And then we have the virtualization network devices, where the queue
selection has to be made precisely so that the packet reaches the
proper destination; there it is a correctness requirement rather
than a performance improvement.  It is also a situation where the
TX queue selection has to happen between qdisc activity and hitting
the device.
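
Something along these lines, where mac_to_queue() is a stand-in for
whatever lookup table the virtualization side keeps:

#include <linux/if_ether.h>
#include <linux/skbuff.h>

/* Exact selection by destination MAC, with a default queue when
 * nothing matches. */
static u16 select_txq_by_mac(struct sk_buff *skb, u16 default_queue)
{
        const struct ethhdr *eth = eth_hdr(skb);
        int q = mac_to_queue(eth->h_dest);      /* stand-in lookup */

        return q >= 0 ? (u16)q : default_queue;
}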

I think we will initially have to live with taking the centralized
qdisc lock for the device, getting in and out of that as fast as
possible, and then taking only the TX queue lock of the selected
queue.
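
In pseudo-code, the lock ordering I have in mind (qdisc_lock and
txq_lock[] are just stand-ins, not existing fields, and error
handling is omitted):

        spin_lock(&qdisc_lock);         /* centralized, per device */
        skb = q->dequeue(q);            /* in and out fast */
        queue = select_txq(skb, num_queues);
        spin_unlock(&qdisc_lock);

        spin_lock(&txq_lock[queue]);    /* only the selected queue */
        dev->hard_start_xmit(skb, dev);
        spin_unlock(&txq_lock[queue]);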

After we get things that far we can try to find some clever lockless
algorithm for handling the qdisc to get rid of that hot spot.

These queue selection schemes want a common piece of generic code: a
set of load balancing algorithms, a "select TX queue by MAC with a
default fallback on no match" selector for virtualization, and
interfaces for both drivers and userspace to change the queue
selection scheme.
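
Roughly, an interface shaped like this (every name below is made up
for illustration):

struct txq_selector {
        const char      *name;
        u16             (*select)(struct net_device *dev,
                                  struct sk_buff *skb);
};

/* drivers register the selection algorithms they support */
int txq_selector_register(struct txq_selector *sel);

/* userspace (via ethtool or netlink, say) picks one per device */
int dev_set_txq_selector(struct net_device *dev, const char *name);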
