Date:	Fri, 05 Mar 2010 09:22:00 +0900
From:	Tejun Heo <tj@...nel.org>
To:	Dimitri Sivanich <sivanich@....com>
CC:	linux-kernel@...r.kernel.org,
	Rusty Russell <rusty@...tcorp.com.au>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Heiko Carstens <heiko.carstens@...ibm.com>
Subject: Re: [PATCH] improve stop_machine performance

Hello,

On 03/05/2010 06:20 AM, Dimitri Sivanich wrote:
> On systems with large cpu counts, we've been seeing long bootup times
> associated with stop_machine operations.  I've noticed that by simply
> removing the creation of the workqueue and associated percpu variables
> in subsequent stop_machine calls, we can reduce boot times on a
> 1024 processor SGI UV system from 25-30 (or more) minutes down to 12
> minutes.
> 
> The attached patch does this in a simple way by removing the
> stop_machine_destroy interface, thereby leaving the workqueues and
> percpu variables in place for reuse once they have been created.
> 
> If people are against having these areas around after boot, maybe there
> are some alternatives that will still allow for this optimization:
> 
>  - Set a timer to go off after a configurable number of minutes, at
>    which point the workqueue areas will be deleted.
> 
>  - Keep the stop_machine_destroy function, but somehow run it at the tail
>    end of boot (after modules have loaded), rather than running it at
>    every stop_machine call.

Yeah, I can indeed imagine that creating and destroying all those
workers on every module load during boot would be very costly if there
are lots of CPUs.  How about sharing the migration thread so that it
serves as a simple one-per-cpu uninterruptible RT thread pool?  It's
not like these things can run concurrently anyway; they have to take
their turns.  I'll go ahead and make something up.
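
Roughly, something along these lines (just a sketch to show the
direction, not against any tree; the names below, cpu_stopper,
cpu_stop_work and so on, are made up for illustration):

#include <linux/completion.h>
#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

/*
 * Sketch only: one always-on RT kthread per cpu (the migration
 * thread) pulls simple work items off a per-cpu list.  stop_machine()
 * would queue its state-machine callback to every online cpu instead
 * of creating and tearing down a workqueue on each call.
 */
struct cpu_stop_work {
	struct list_head	list;
	int			(*fn)(void *arg);
	void			*arg;
	struct completion	*done;	/* signalled when fn has run */
};

struct cpu_stopper {
	spinlock_t		lock;
	struct list_head	works;	/* pending work items */
	struct task_struct	*thread; /* "migration/N", SCHED_FIFO */
};

static DEFINE_PER_CPU(struct cpu_stopper, cpu_stopper);

/* queue @work on @cpu's stopper thread and kick it */
static void cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
{
	struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
	unsigned long flags;

	spin_lock_irqsave(&stopper->lock, flags);
	list_add_tail(&work->list, &stopper->works);
	wake_up_process(stopper->thread);
	spin_unlock_irqrestore(&stopper->lock, flags);
}

stop_machine() would then just queue one work item per online cpu and
wait for the completions, so nothing needs to be allocated or torn
down per call.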

Thanks.

-- 
tejun
