Date:	Mon, 08 Sep 2008 09:03:02 -0700
From:	Mike Travis <>
To:	Andi Kleen <>
CC:	Ingo Molnar <>,
	Andrew Morton <>, David Miller <>,
	Eric Dumazet <>,
	"Eric W. Biederman" <>,
	Jack Steiner <>,
	Jeremy Fitzhardinge <>,
	Jes Sorensen <>, "H. Peter Anvin" <>,
	Thomas Gleixner <>,
Subject: Re: [RFC 09/13] genapic: reduce stack pressure in io_apic.c step
 1 temp cpumask_ts

Andi Kleen wrote:
> Mike Travis <> writes:
>>   * Step 1 of cleaning up io_apic.c removes local cpumask_t variables
>>     from the stack.
> Sorry, that patch seems incredibly messy.  Global variables and a
> tricky ordering; while it's at least commented, it's still a mess and
> maintenance-unfriendly.
> Also I think set_affinity is the only case where a truly arbitrary cpu
> mask can be passed in anyway.  And it's passed in from elsewhere.
> The other cases generally just want to handle a subset of CPUs which
> are nearby.  How about you define a new cpumask-like type that
> consists of a start/stop CPU and a mask covering that range only,
> and is no larger than a few words?
> I think with that the nearby assignments could be handled 
> reasonably cleanly with arguments and local variables.
> And I suspect that with some restructuring set_affinity could
> also be made to support such a model.
> -Andi

Thanks for the comments.  I did mull over something like this early on
while researching this "cpumask" problem, but maintaining a separate set
of cpumask operators for a second type didn't seem worthwhile.  Perhaps
for a very limited use (with very few ops), though, it would be.
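
To make that concrete, here is a minimal sketch of the kind of type I
read you as suggesting.  All of the names below are hypothetical, just
to pin the idea down, and it covers a single word rather than "a few":

	#include <stdbool.h>

	/* Covers only the CPUs in [start, start + RANGE_BITS),
	 * instead of all NR_CPUS like a full cpumask_t. */
	#define RANGE_BITS (8 * sizeof(unsigned long))	/* one word for now */

	struct cpumask_range {
		unsigned int start;	/* first CPU covered by this mask */
		unsigned long bits;	/* one bit per CPU in the range */
	};

	static inline bool cpu_in_range(const struct cpumask_range *r,
					unsigned int cpu)
	{
		return cpu >= r->start && cpu - r->start < RANGE_BITS;
	}

	static inline void range_set_cpu(unsigned int cpu,
					 struct cpumask_range *r)
	{
		if (cpu_in_range(r, cpu))
			r->bits |= 1UL << (cpu - r->start);
	}

	static inline bool range_test_cpu(unsigned int cpu,
					  const struct cpumask_range *r)
	{
		return cpu_in_range(r, cpu) &&
			(r->bits & (1UL << (cpu - r->start)));
	}

With something like that, the "nearby" cases stay a couple of words on
the stack no matter how large NR_CPUS grows.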

But how big should these be?  Variable sized?  A config option?  Should
I introduce some kind of MAX_CPUS_PER_NODE constant?  (I don't think
NR_CPUS/MAX_NUMNODES is the right answer.)
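
If it ends up as a config option, the sizing itself is trivial.  Purely
as an illustration, with a made-up CONFIG_ name (not an existing
option):

	#ifdef CONFIG_CPUMASK_RANGE_BITS	/* hypothetical option */
	#define RANGE_BITS CONFIG_CPUMASK_RANGE_BITS
	#else
	#define RANGE_BITS 64			/* one-word default on 64-bit */
	#endif

That still leaves the question of what the default should be.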
