Date:	Wed, 5 Feb 2014 17:05:33 +0000
From:	David Laight <David.Laight@...LAB.COM>
To:	'Dan Williams' <dan.j.williams@...el.com>
CC:	Mathias Nyman <mathias.nyman@...ux.intel.com>,
	"linux-usb@...r.kernel.org" <linux-usb@...r.kernel.org>,
	"sarah.a.sharp@...ux.intel.com" <sarah.a.sharp@...ux.intel.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [RFCv2 00/10] xhci: re-work command queue management

From: Dan Williams
> Yes, but I think we need to centralize the context under which
> commands are submitted.  The complicating factor is the mix of
> synchronous command submission and interrupt driven asynchronous
> command queuing.  I think we can simplify it by making it all
> submitted from a single event queue context.  I'm investigating if
> that is a workable solution...

Given that the entire network stack runs from (I believe) 'process level'
contexts (NAPI functions), it is rather surprising that the USB stack
runs everything from actual interrupt context (or in the caller's
context with interrupts disabled).

For large SMP systems, disabling interrupts and acquiring global locks
like that just doesn't scale.

Even for 'normal' URBs it ought to be possible to queue them for later
submission, rather than having to put them on the transfer ring
immediately.

Ideally the 'event' ring ought to be large enough for all the events
that the controller can add. That is approximately one per TD (two for
a short RX) and one per command. So, much like the 'reserved command'
slots (which aren't implemented correctly), there should probably be
'reserved event' slots - otherwise the event ring might become full
(if the host stops servicing interrupts for any length of time).
Avoiding the need for large numbers of events really means restricting
the number of TDs that request interrupts.
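
The accounting could be as simple as an atomic count of free event
slots, reserved before anything that can generate events is queued
(again untested, invented names, not the real xhci structures):

#include <linux/atomic.h>

struct evt_ring_budget {
	atomic_t	free_slots;	/* events the ring can still absorb */
};

/*
 * Reserve room for the worst-case number of events a TD or command can
 * generate (two for a short RX TD, otherwise one) before it goes on a ring.
 */
static bool evt_ring_reserve(struct evt_ring_budget *b, int events)
{
	if (atomic_sub_return(events, &b->free_slots) >= 0)
		return true;

	/* Not enough room: back off and let the caller retry later. */
	atomic_add(events, &b->free_slots);
	return false;
}

/* Called once the event handler has consumed the corresponding events. */
static void evt_ring_release(struct evt_ring_budget *b, int events)
{
	atomic_add(events, &b->free_slots);
}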

isoc and disk might limit this anyway, but network traffic can queue
a much larger number of TDs. I've seen 20 tx TDs (each nearly 60kB)
queued; every time one completes, another is immediately added.
usbnet needs to be taught how to do BQL, and something needs to
(somehow) reduce the number of interrupts.
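
The BQL side is just the standard netdev_sent_queue() /
netdev_completed_queue() pair; roughly where they would sit in a
usbnet-style tx path (the surrounding functions here are placeholders
for illustration, not the real usbnet code):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/usb.h>

static netdev_tx_t example_start_xmit(struct sk_buff *skb,
				      struct net_device *net)
{
	unsigned int len = skb->len;

	/* ... build and submit the tx URB here (placeholder) ... */

	/* Tell BQL how many bytes are now in flight. */
	netdev_sent_queue(net, len);
	return NETDEV_TX_OK;
}

/* URB completion: runs once per tx TD that requested an interrupt. */
static void example_tx_complete(struct urb *urb)
{
	struct sk_buff *skb = urb->context;
	struct net_device *net = skb->dev;

	/* Report completed work so BQL can bound the queue depth. */
	netdev_completed_queue(net, 1, skb->len);

	dev_kfree_skb_any(skb);
}

With the byte limit in place, the number of TDs outstanding (and hence
the number of completion interrupts) falls out of the BQL accounting
rather than being bounded only by memory.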

(end of ramble)

	David


