Message-ID: <20061030144259.GD10235@parisc-linux.org>
Date:	Mon, 30 Oct 2006 07:42:59 -0700
From:	Matthew Wilcox <matthew@....cx>
To:	Kyle Moffett <mrmacman_g4@....com>
Cc:	Linus Torvalds <torvalds@...l.org>,
	"Adam J. Richter" <adam@...drasil.com>, akpm@...l.org,
	bunk@...sta.de, greg@...ah.com, linux-kernel@...r.kernel.org,
	linux-pci@...ey.karlin.mff.cuni.cz, pavel@....cz,
	shemminger@...l.org
Subject: Re: [patch] drivers: wait for threaded probes between initcall levels

On Mon, Oct 30, 2006 at 09:23:10AM -0500, Kyle Moffett wrote:
> recursive make invocations and nested directories).  Likewise in the  
> context of recursively nested busses and devices; multiple PCI  
> domains, USB, Firewire, etc.

I don't think you know what a PCI domain is ...

> Well, perhaps it does.  If I have (hypothetically) a 64-way system  
> with several PCI domains, I should be able to not only start scanning  
> each PCI domain individually,  but once each domain has been scanned  
> it should be able to launch multiple probing threads, one for each  
> device on the PCI bus.  That is, assuming that I have properly set up  
> my udev to statically name devices.

There's still one spinlock that protects *all* accesses to PCI config
space.  Maybe we should make it one per PCI root bridge or something,
but even that wouldn't help some architectures.
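
Very roughly, the difference might look like the sketch below.  pci_lock
is the real global in drivers/pci/access.c; the per-bridge structure and
the bus_to_host_bridge() helper are invented for illustration, and some
architectures have nothing that could back such a helper, which is why it
wouldn't help them.

#include <linux/pci.h>
#include <linux/spinlock.h>

/* Today: one global lock serialises every config access in the system. */
static DEFINE_SPINLOCK(pci_lock);

/* Hypothetical alternative: one lock per host bridge, so probes under
 * different bridges stop contending with each other. */
struct host_bridge_locks {
	spinlock_t config_lock;		/* covers config space below this bridge */
};

static int example_read_config_dword(struct pci_bus *bus, unsigned int devfn,
				     int where, u32 *val)
{
	/* bus_to_host_bridge() is made up for this sketch */
	spinlock_t *lock = &bus_to_host_bridge(bus)->config_lock;
	unsigned long flags;
	int ret;

	spin_lock_irqsave(lock, flags);
	ret = bus->ops->read(bus, devfn, where, 4, val);
	spin_unlock_irqrestore(lock, flags);
	return ret;
}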

> I admit the complexity is a bit high, but since the maximum nesting  
> is bounded by the complexity of the hardware and the number of  
> busses, and the maximum memory-allocation is strictly limited in the  
> single-threaded case this could allow 64-way systems to probe all  
> their hardware an order of magnitude faster than today without  
> noticeably impacting an embedded system even in the absolute worst case.

To be honest, I think just scaling PARALLEL to NR_CPUS*4 or something
would be a reasonable way to go.
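
Concretely, something like this sketch: the names (probe_sem, do_probe(),
spawn_probe()) are invented, but the pattern is just a counting semaphore
sized to NR_CPUS*4 gating how many probe threads run at once.

#include <linux/device.h>
#include <linux/kthread.h>
#include <linux/threads.h>
#include <asm/semaphore.h>

#define PARALLEL	(NR_CPUS * 4)	/* rather than a small fixed constant */

static struct semaphore probe_sem;

static int probe_thread(void *data)
{
	struct device *dev = data;

	do_probe(dev);		/* hypothetical: the actual probe work */
	up(&probe_sem);		/* give the slot back */
	return 0;
}

static void spawn_probe(struct device *dev)
{
	down(&probe_sem);	/* blocks once PARALLEL probes are in flight */
	kthread_run(probe_thread, dev, "probe-%s", dev->bus_id);
}

/* call once, before the first spawn_probe() */
static void probe_init(void)
{
	sema_init(&probe_sem, PARALLEL);
}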

If people actually want to get serious about this, I know the PPC folks
have some Open Firmware call that tells them about power domains and how
many SCSI discs they can spin up at one time (for example).  Maybe
that's not necessary: if we can figure out what the system's maximum power
draw is and how close we are to it, we can decide whether or not to spawn
another thread.
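
In other words, something along these lines; the interface is entirely
made up, and the numbers would have to come from firmware:

#include <linux/spinlock.h>

/* Invented: a platform-supplied power budget consulted before
 * spinning up another device. */
struct power_budget {
	spinlock_t lock;
	int max_mw;		/* maximum sustainable draw, per firmware */
	int draw_mw;		/* estimated current draw */
};

/* Reserve cost_mw (e.g. one disc spin-up) if it fits within the
 * budget; returns true if the caller may spawn another probe thread. */
static int may_spawn_probe(struct power_budget *pb, int cost_mw)
{
	int ok;

	spin_lock(&pb->lock);
	ok = pb->draw_mw + cost_mw <= pb->max_mw;
	if (ok)
		pb->draw_mw += cost_mw;
	spin_unlock(&pb->lock);
	return ok;
}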

It's quite complicated: you can spin up a disc over *here*, but not
over *there*.  This really is a gigantic can of worms we're opening.
