Message-ID: <20080731193204.GG9663@sgi.com>
Date: Thu, 31 Jul 2008 14:32:04 -0500
From: Robin Holt <holt@....com>
To: "Eric W. Biederman" <ebiederm@...ssion.com>
Cc: Robin Holt <holt@....com>, linux-kernel@...r.kernel.org,
Pavel Emelyanov <xemul@...nvz.org>,
Oleg Nesterov <oleg@...sign.ru>,
Sukadev Bhattiprolu <sukadev@...ibm.com>,
Paul Menage <menage@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [Patch] Scale pidhash_shift/pidhash_size up based on
num_possible_cpus().

On Thu, Jul 31, 2008 at 11:35:19AM -0700, Eric W. Biederman wrote:
> Robin Holt <holt@....com> writes:
>
> > For large cpu configurations, we find that the number of pids in a
> > pidhash bucket causes things like 'ps' to perform slowly. Raising
> > pidhash_shift from 12 to 16 cut the time for 'ps' in half on a 2048
> > cpu machine.
> >
> > This patch makes the upper limit scale based upon num_possible_cpus().
> > For machines with 128 cpus or fewer, the current upper limit of 12
> > is maintained.
>
> It looks like there is a magic limit we are dancing around.
>
> Can we please make the maximum for the hash table size be based
> on the maximum number of pids. That is fls(PID_MAX_LIMIT) - 6?
I am happy to base it upon whatever you think is correct. So long as it
goes up for machines with lots of cpus, that will satisfy me. It is
probably as much a problem on smaller machines, but if you have _THAT_
many pids in use, you are probably oversubscribing many other resources
and don't really care. That limit will essentially become a constant
(the compiler may even do that for us, but I have not checked any arch
other than ia64). Should I just replace the 12 with a 16 or 17 or some
new magic number?
Thanks,
Robin
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/