Date:	Mon, 04 Aug 2008 17:38:56 -0700
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	Robin Holt <holt@....com>
Cc:	Stephen Champion <schamp@....com>, linux-kernel@...r.kernel.org,
	Pavel Emelyanov <xemul@...nvz.org>,
	Oleg Nesterov <oleg@...sign.ru>,
	Sukadev Bhattiprolu <sukadev@...ibm.com>,
	Paul Menage <menage@...gle.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [Patch] Scale pidhash_shift/pidhash_size up based on num_possible_cpus().

Robin Holt <holt@....com> writes:

> But if we simply scale based upon num_possible_cpus(), we get a relatively
> representative scaling function.  Usually, customers buy machines with 1,
> 2, or 4GB per cpu.  I would expect a waste of 256k, 512k, or even 1m to
> be acceptable at this size of machine.

For your customers, and your kernel thread workload, you get a
reasonable representation.  For other people and other workloads you
don't.  I happen to know of a completely different class of workload
that can do better.
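
To make the numbers concrete, below is a quick userspace sketch that
compares the current memory-based sizing in kernel/pid.c (roughly as it
stands today) with a hypothetical variant that lifts the cap as
num_possible_cpus() grows.  The cpu-scaling constants are invented for
illustration; this is not the patch under discussion.  With 8-byte
hlist_head buckets, the 256k/512k/1m of acceptable waste you mention
corresponds to a pidhash_shift of roughly 15 to 17.

/*
 * Illustration only, not the posted patch: compare the current
 * memory-based pid hash sizing (kernel/pid.c, roughly) with a
 * hypothetical rule that also scales with the cpu count.  Builds and
 * runs in userspace; "megabytes" and "cpus" stand in for
 * nr_kernel_pages and num_possible_cpus().
 */
#include <stdio.h>

/* fls() as the kernel defines it: 1-based index of the highest set bit. */
static int fls(unsigned long x)
{
        int r = 0;

        while (x) {
                x >>= 1;
                r++;
        }
        return r;
}

static int max(int a, int b) { return a > b ? a : b; }
static int min(int a, int b) { return a < b ? a : b; }

/* Current rule: memory-based, capped at 1 << 12 = 4096 buckets. */
static int shift_by_memory(unsigned long megabytes)
{
        return min(12, max(4, fls(megabytes * 4)));
}

/* Hypothetical rule: lift the cap with the cpu count (constants made up). */
static int shift_by_memory_and_cpus(unsigned long megabytes, unsigned int cpus)
{
        int cap = min(17, 12 + fls(cpus) / 2);

        return min(cap, max(4, fls(megabytes * 4)));
}

int main(void)
{
        unsigned int cpus;

        for (cpus = 1; cpus <= 4096; cpus *= 8) {
                unsigned long megabytes = cpus * 2048UL;  /* ~2GB per cpu */
                int s1 = shift_by_memory(megabytes);
                int s2 = shift_by_memory_and_cpus(megabytes, cpus);

                printf("%4u cpus: current shift %2d (%6d buckets), "
                       "cpu-scaled shift %2d (%6d buckets)\n",
                       cpus, s1, 1 << s1, s2, 1 << s2);
        }
        return 0;
}

Run that and the cpu-scaled rule grows from 32KB of buckets on a small
box to 1MB at 512 cpus and beyond, while the current rule stays pinned
at 32KB for anything with half a gigabyte of memory or more.  That may
well be the right trade-off for your machines; for other workloads it
is just wasted memory.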

> For 2.6.27, would you accept an upper cap based on the memory size
> algorithm you have now and adjusted for num_possible_cpus()?  Essentially
> the first patch I posted.

I want to throw a screaming hissy fit.

The merge window has closed.  This is not a bug.  This is not a
regression.  I don't see a single compelling reason to consider this
for 2.6.27.  I asked for clarification so I could be certain you were
solving the right problem.

Why didn't these patches show up 3 months ago when the last merge
window closed?  Why not even earlier?

I totally agree that what we are doing could be done better; however,
at this point we should be looking at 2.6.28, in which case looking at
the general, long-term, non-hack solution is the right way to go.  Can
we scale to different workloads?

For everyone with fewer than 4K cpus the current behavior is fine, and
with 4K cpus it results in a modest slowdown.  This sounds usable.
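
Back of the envelope, with made-up but plausible numbers: the current
cap is pidhash_shift = 12, i.e. 4096 buckets.  Even if a 4096-cpu
machine carries on the order of ten kernel threads per cpu, that is
roughly 40,000 pids hashed into 4096 buckets, so a pid lookup walks a
chain of about ten entries instead of one or two.  Slower, certainly,
but not the end of the world.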

You have hit an extremely sore spot with me.  Any time someone makes an
argument that I hear as "RHEL is going to ship 2.6.27, so we _need_
this patch in 2.6.27", I want to stop listening.  I just don't care.
Unfortunately I have heard that argument almost once a day for the last
week, and I am tired of it.

Why hasn't someone complained that waitpid is still slow?

Why haven't we seen patches to reduce the number of kernel threads
since the last time you had problems with the pid infrastructure?

A very frustrated code reviewer.

So yes.  If you are not interested in 2.6.28 and in the general problem,
I'm not interested in this problem.

Eric
