Message-ID: <87prsk86d5.fsf@basil.nowhere.org>
Date:	Mon, 21 Apr 2008 01:16:22 +0200
From:	Andi Kleen <andi@...stfloor.org>
To:	Arjan van de Ven <arjan@...radead.org>
Cc:	Daniel Hazelton <dhazelton@...er.net>,
	Adrian Bunk <bunk@...nel.org>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Shawn Bohrer <shawn.bohrer@...il.com>,
	Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: x86: 4kstacks default

Arjan van de Ven <arjan@...radead.org> writes:
>
> it is you who keeps putting up the 50k argument.

See the links I posted and quoted in an earlier message up the thread if you
don't remember what you wrote yourself.

I originally only held up the fragmentation argument (or rather only
argued against it), until I was corrected by both Ingo and you in the
earlier thread, where you both insisted that 50k threads were the real
raison d'être for 4k stacks.

You're saying that was wrong, and the fragmentation issue was really the
reason for 4k stacks? If both you and Ingo can agree on that
I would be happy to forget the 50k threads :)

> What I'm talking about is in the 10k to 20k range; and that is actual workloads
> by real customers.

On a 32bit kernel? 

My estimate is that you need around 32k of pinned lowmem for a functional
blocked thread in a network server (8k stack + 2*4k for poll with a large fd
table and wait queues + some pinned dentries and inodes + misc other stuff).
With 20k threads you're 625MB into your lowmem, which leaves about 200MB on a
3:1 system with 16GB (and ~128MB mem_map). That might work for some time, but
I expect it will fall over at some point because there is just too much pinned
lowmem and not enough left for other stuff (like networking buffers etc.)
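The arithmetic above can be checked with a quick sketch. The per-thread and
lowmem figures are the estimates from the paragraph (and the usual ~896MB
lowmem of a 3:1 split), not measured values:

```python
# Back-of-envelope check of the pinned-lowmem estimate above.
# All per-thread numbers are estimates, not measurements.
KB = 1024
MB = 1024 * 1024

threads = 20_000
# ~32 KB pinned lowmem per blocked network-server thread:
# 8 KB stack + 2 * 4 KB for poll() fd tables / wait queues
# + pinned dentries, inodes, and misc other stuff.
per_thread = 32 * KB

pinned = threads * per_thread
print(pinned // MB)                # 625 MB of lowmem pinned by threads

lowmem = 896 * MB                  # ~896 MB lowmem on a 3:1 split
headroom = lowmem - pinned         # what is left before subtracting
print(headroom // MB)              # the ~128 MB mem_map for 16 GB RAM
```

With 10k threads the pinned total halves to ~312MB, which is why that
figure sounds more doable.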

10k sounds more doable. But again, does 4k more or less make
a big difference next to the other per-thread overhead? I don't think so.

And trading reliability (and functionality -- you basically have to
cut off XFS) just for 4k/thread doesn't seem like a good bargain to
me. Especially with kernel code getting more complicated all the time.

>> I don't see any evidence that there are serious order 1 fragmentation 
>> issues on 2.6. 
>
> I assume you're not asking me to give you customer confidential data from a previous job in public ;)

Well, if it is that serious a problem, surely it would have hit some public
bugzillas or mailing lists? Arguing from something secret is also not
very useful.

Also I find it always important to reevaluate assumptions when new
facts come up. In this case we should reevaluate a decision that made
sense[1] in 2.4 against the new facts of 2.6 (e.g. the new VM with much
better reclaim).

[1] referring to the fragmentation argument, not the 50k threads, which
were always unrealistic.

-Andi
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
