Message-ID: <Pine.LNX.4.64.0610271323020.3849@g5.osdl.org>
Date:	Fri, 27 Oct 2006 13:42:44 -0700 (PDT)
From:	Linus Torvalds <torvalds@...l.org>
To:	Andrew Morton <akpm@...l.org>
cc:	Stephen Hemminger <shemminger@...l.org>,
	Pavel Machek <pavel@....cz>, Greg KH <greg@...ah.com>,
	Matthew Wilcox <matthew@....cx>, Adrian Bunk <bunk@...sta.de>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-pci@...ey.karlin.mff.cuni.cz
Subject: Re: [patch] drivers: wait for threaded probes between initcall levels



On Fri, 27 Oct 2006, Andrew Morton wrote:
> 
> I couldn't work out a way of doing that.  I guess one could a) count the
> number of threads which are going to be started, b) start them all, c) do
> an up() when each thread ends and d) handle errors somehow.

No. First off, you want to _limit_ the maximum amount of parallelism 
anyway (memory pressure and sanity), so you want to use the counting 
semaphore for that too.

The easiest way to do it would probably be something like this:

	#include <linux/sched.h>	/* kernel_thread() */
	#include <linux/completion.h>
	#include <asm/semaphore.h>

	#define PARALLELISM (10)

	/* Counts the available parallelism: sema_init() this to 0 before use */
	static struct semaphore outstanding;

	struct thread_exec {
		int (*fn)(void *);
		void *arg;
		struct completion completion;
	};

	/* Make n more units of parallelism available */
	static void allow_parallel(int n)
	{
		while (--n >= 0)
			up(&outstanding);
	}

	/* Take back n units: returns only after that much work has finished */
	static void wait_for_parallel(int n)
	{
		while (--n >= 0)
			down(&outstanding);
	}

	static int do_in_parallel(void *arg)
	{
		struct thread_exec *p = arg;
		int (*fn)(void *) = p->fn;
		void *fn_arg = p->arg;
		int retval;

		/*
		 * Tell the caller we are done with the arguments: we
		 * copied fn and fn_arg above, so the caller's on-stack
		 * thread_exec may now go away.
		 */
		complete(&p->completion);

		/* Do the actual work in parallel */
		retval = fn(fn_arg);

		/*
		 * And then tell the rest of the world that we've
		 * got one less parallel thing outstanding..
		 */
		up(&outstanding);
		return retval;
	}

	static void execute_in_parallel(int (*fn)(void *), void *arg)
	{
		struct thread_exec ex = { .fn = fn, .arg = arg };

		/* Make sure we can have more outstanding parallel work */
		down(&outstanding);

		init_completion(&ex.completion);

		/*
		 * kernel_thread() wants clone flags too. If it fails,
		 * just run the work synchronously: do_in_parallel()
		 * still does the complete() and the up() for us.
		 */
		if (kernel_thread(do_in_parallel, &ex, CLONE_KERNEL) < 0) {
			do_in_parallel(&ex);
			return;
		}

		/* We need to wait until our on-stack "ex" is safe to reuse */
		wait_for_completion(&ex.completion);
	}

The above is ENTIRELY UNTESTED, but the point of it is that it should now 
allow you to do something like this:

	/* Set up how many parallel threads we can run */
	allow_parallel(PARALLELISM);

	...

	/*
	 * Run an arbitrary number of threads with that
	 * parallelism.
	 */
	for (i = 0; i < ... ; i++)
		execute_in_parallel(fnarray[i].function, 
				    fnarray[i].argument);

	...

	/* And wait for all of them to complete */
	wait_for_parallel(PARALLELISM);

and this is totally generic (ie this is useful for initcalls or anything 
else). Note also how you can set up the parallelism (and wait for it) 
totally independently (ie that can be done at some earlier stage, and 
"execute_in_parallel()" can just be executed in any random situation in 
between, as many times as you like). It will always honor the parallelism.
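
For example, using the helpers above, two independent batches can share 
the one semaphore, with the wait acting as a barrier between them (this 
is a sketch that is not in the original mail; the "early"/"late" arrays 
and their counts are made-up names):

	allow_parallel(PARALLELISM);

	/* batch one */
	for (i = 0; i < nr_early; i++)
		execute_in_parallel(early[i].function, early[i].argument);

	/* barrier: all of batch one has finished, the count is back to zero */
	wait_for_parallel(PARALLELISM);

	/* re-arm, and run a second, completely independent batch */
	allow_parallel(PARALLELISM);
	for (i = 0; i < nr_late; i++)
		execute_in_parallel(late[i].function, late[i].argument);
	wait_for_parallel(PARALLELISM);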

By setting PARALLELISM to 1, you basically only ever allow one outstanding 
call at any time (ie it becomes serial), so you don't even have to make 
this a config option: you could make it a runtime setup thing.
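
A minimal sketch of such a runtime knob, not in the original mail (the 
parameter name "driver_parallelism" is made up), using the kernel's 
__setup() command-line hook:

	static int parallelism = PARALLELISM;

	/*
	 * Parse e.g. "driver_parallelism=1" off the kernel command line;
	 * 1 makes the whole thing serial, as described above.
	 */
	static int __init parallelism_setup(char *str)
	{
		get_option(&str, &parallelism);
		return 1;
	}
	__setup("driver_parallelism=", parallelism_setup);

Then the setup code does "allow_parallel(parallelism)" instead of using 
the compile-time constant.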

Hmm?

(And I repeat: the above code is untested, and was written in the email 
client. It has never seen a compiler, and hasn't gotten a _whole_ lot of 
thinking.)

		Linus
