Date:	Fri, 22 Jun 2007 09:52:38 +0000
From:	"Huang, Ying" <ying.huang@...el.com>
To:	Stefan Richter <stefanr@...6.in-berlin.de>
Cc:	Greg K-H <greg@...ah.com>,
	Cornelia Huck <cornelia.huck@...ibm.com>,
	Adrian Bunk <bunk@...sta.de>, david@...g.hm,
	David Miller <davem@...emloft.net>,
	Duncan Sands <duncan.sands@...h.u-psud.fr>,
	Phillip Susi <psusi@....rr.com>,
	linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] driver core: multithreaded probing - more parallelism
	control

On Thu, 2007-06-21 at 18:21 +0200, Stefan Richter wrote:
> Parallelism between subsystems may be interesting during boot ==
> "coldplug", /if/ the machine has time-consuming devices to probe on
> /different/ types of buses.  Of course some machines do the really
> time-consuming stuff on only one type of bus.  Granted, parallelism
> between subsystems is not very interesting anymore later after boot ==
> "hotplug".

Yes. So I think there are two possible solutions:

1. Create one set of probing queues for each subsystem (or maybe just
for the subsystems that need it), so that the probing queue IDs are
local to each subsystem.
2. Keep only one set of probing queues in the whole system, with the
probing queue IDs shared between subsystems. Each subsystem can select a
random starting queue ID (maybe named start_queue_id) and allocate its
queue IDs from that point on (start_queue_id + private_queue_id), so the
probability of different subsystems sharing a queue ID is reduced
(roughly as sketched below).
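
Something like this rough sketch of option 2 (NR_PROBE_QUEUES,
struct subsys_probe_ctx and the helper names are made up just to show
the ID mapping, they are not from the patch):

#include <linux/random.h>

#define NR_PROBE_QUEUES	16	/* assumed size of the shared queue pool */

struct subsys_probe_ctx {
	unsigned int start_queue_id;	/* random per-subsystem offset */
};

static void subsys_probe_init(struct subsys_probe_ctx *ctx)
{
	unsigned int r;

	/* Pick a random starting point in the shared queue ID space. */
	get_random_bytes(&r, sizeof(r));
	ctx->start_queue_id = r % NR_PROBE_QUEUES;
}

static unsigned int subsys_probe_queue_id(struct subsys_probe_ctx *ctx,
					  unsigned int private_queue_id)
{
	/*
	 * Map the subsystem-private ID into the shared space.  Collisions
	 * are still possible, just less likely than if every subsystem
	 * started from queue 0.
	 */
	return (ctx->start_queue_id + private_queue_id) % NR_PROBE_QUEUES;
}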

> (The old FireWire stack will re-enter the main loop of the bus scanning
> thread sometime after a bus reset event signaled that nodes or units may
> have appeared or disappeared.  The new FireWire stack will schedule
> respective scanning workqueue jobs after such an event.)

I think a workqueue is better than a kernel thread here. With a kernel
thread, the nodes and units may need to be scanned again and again if
many units/nodes appear at almost the same time, while with a workqueue
only the jobs that are actually needed get scheduled.
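
Roughly like this (fw_node and the function names here are just
placeholders to show the scheduling pattern, not the real code of the
new stack):

#include <linux/kernel.h>
#include <linux/workqueue.h>

struct fw_node {
	struct work_struct scan_work;
	/* ... node state ... */
};

static void fw_node_scan(struct work_struct *work)
{
	struct fw_node *node = container_of(work, struct fw_node, scan_work);

	/* Probe just this node; other nodes have their own work items. */
	pr_debug("scanning node %p\n", node);
}

/* Called from the bus-reset handler for each node that (re)appeared. */
static void fw_node_appeared(struct fw_node *node)
{
	INIT_WORK(&node->scan_work, fw_node_scan);
	schedule_work(&node->scan_work);
}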

And a workqueue like the probing queue, whose thread can be
created/destroyed on demand, will save more resources than an ordinary
workqueue. :)
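
By "on demand" I mean something along these lines (only a sketch with
most error handling omitted; struct probe_queue and its helpers are an
illustration, not the code from the patch):

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/mutex.h>

struct probe_queue {
	struct mutex lock;
	struct list_head work;		/* pending probe jobs */
	struct task_struct *thread;	/* NULL while the queue is idle */
};

static void probe_queue_init(struct probe_queue *q)
{
	mutex_init(&q->lock);
	INIT_LIST_HEAD(&q->work);
	q->thread = NULL;
}

static int probe_queue_thread(void *data)
{
	struct probe_queue *q = data;

	mutex_lock(&q->lock);
	while (!list_empty(&q->work)) {
		struct list_head *job = q->work.next;

		list_del(job);
		mutex_unlock(&q->lock);
		/* ... run the probe job (may sleep) ... */
		mutex_lock(&q->lock);
	}
	q->thread = NULL;		/* queue drained: this thread exits */
	mutex_unlock(&q->lock);
	return 0;
}

static void probe_queue_add(struct probe_queue *q, struct list_head *job)
{
	mutex_lock(&q->lock);
	list_add_tail(job, &q->work);
	if (!q->thread) {		/* queue was idle: spawn a worker */
		q->thread = kthread_run(probe_queue_thread, q, "probing");
		if (IS_ERR(q->thread))
			q->thread = NULL;
	}
	mutex_unlock(&q->lock);
}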

Best Regards,
Huang Ying