lists.openwall.net — Open Source and information security mailing list archives
Date: Mon, 08 Sep 2008 09:03:02 -0700
From: Mike Travis <travis@....com>
To: Andi Kleen <andi@...stfloor.org>
CC: Ingo Molnar <mingo@...e.hu>, Andrew Morton <akpm@...ux-foundation.org>,
	davej@...emonkey.org.uk, David Miller <davem@...emloft.net>,
	Eric Dumazet <dada1@...mosbay.com>, "Eric W. Biederman" <ebiederm@...ssion.com>,
	Jack Steiner <steiner@....com>, Jeremy Fitzhardinge <jeremy@...p.org>,
	Jes Sorensen <jes@....com>, "H. Peter Anvin" <hpa@...or.com>,
	Thomas Gleixner <tglx@...utronix.de>, linux-kernel@...r.kernel.org
Subject: Re: [RFC 09/13] genapic: reduce stack pressure in io_apic.c step 1 temp cpumask_ts

Andi Kleen wrote:
> Mike Travis <travis@....com> writes:
>
>> * Step 1 of cleaning up io_apic.c removes local cpumask_t variables
>>   from the stack.
>
> Sorry, that patch seems incredibly messy. Global variables
> and a tricky ordering; while it's at least commented, it's still a mess
> and maintenance-unfriendly.
>
> Also I think set_affinity is the only case where a truly arbitrary cpu
> mask can be passed in anyway. And it's passed in from elsewhere.
>
> The other cases generally just want to handle a subset of CPUs which
> are nearby. How about you define a new cpumask-like type that
> consists of a start/stop CPU and a mask for that range only,
> and is not larger than a few words?
>
> I think with that the nearby assignments could be handled
> reasonably cleanly with arguments and local variables.
>
> And I suspect with some restructuring set_affinity could
> also be made to support such a model.
>
> -Andi

Thanks for the comments. I did mull over something like this early on while
researching this "cpumask" problem, but having to maintain a separate set of
cpumask operators didn't seem worthwhile. Perhaps for a very limited use
(with very few ops) it would be.

But how big should these be? Variable-sized? A config option? Should I
introduce some kind of MAX_CPUS_PER_NODE constant? (I don't think
NR_CPUS/MAX_NUMNODES is the right answer.)
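[A rough sketch of the ranged-mask type Andi describes: a base CPU plus a small fixed-size bitmap covering only CPUs near it. This is hypothetical, written as plain buildable C rather than kernel code; the type and function names (struct cpumask_range, range_cpu_set, range_cpu_isset) and the two-word size are invented for illustration, not anything in the patch series.]

```c
#include <stdint.h>
#include <stdbool.h>

#define RANGE_WORDS   2    /* "not larger than a few words" */
#define BITS_PER_WORD 64   /* range covers RANGE_WORDS * 64 = 128 CPUs */

/* A small cpumask covering only CPUs [first, first + 128). */
struct cpumask_range {
	unsigned int first;          /* first CPU covered by the mask */
	uint64_t bits[RANGE_WORDS];  /* bitmap, relative to 'first' */
};

/* Set a CPU in the mask; returns false if the CPU is outside the range. */
static bool range_cpu_set(struct cpumask_range *m, unsigned int cpu)
{
	unsigned int off;

	if (cpu < m->first)
		return false;
	off = cpu - m->first;
	if (off >= RANGE_WORDS * BITS_PER_WORD)
		return false;
	m->bits[off / BITS_PER_WORD] |= 1ULL << (off % BITS_PER_WORD);
	return true;
}

/* Test a CPU; CPUs outside the range are reported as not set. */
static bool range_cpu_isset(const struct cpumask_range *m, unsigned int cpu)
{
	unsigned int off;

	if (cpu < m->first)
		return false;
	off = cpu - m->first;
	if (off >= RANGE_WORDS * BITS_PER_WORD)
		return false;
	return (m->bits[off / BITS_PER_WORD] >> (off % BITS_PER_WORD)) & 1;
}
```

[The appeal for the "nearby CPUs" cases is that the whole structure is a few words, so it can live on the stack or be passed by value even with NR_CPUS=4096, whereas a full cpumask_t would be 512 bytes. The open question from the mail stands: whether the range size should be fixed, variable, or tied to a per-node CPU count.]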
Thanks,
Mike