Open Source and information security mailing list archives
 
Date:	Sat, 20 Jun 2009 13:32:47 +0400
From:	Andrey Borzenkov <arvidjaar@...l.ru>
To:	linux-kernel@...r.kernel.org
Subject: Number of open files scalability?

Hi,

we have a customer that requires a large number of open files. Basically, 
it is SAP with a large Oracle database and a relatively large number of 
concurrent connections from worker processes. Right now the number of 
permanently open files is above 128,000; with current trends in DB and 
load growth it could easily rocket to 1,000,000 or beyond.

So the questions are:

- is there any per-process or per-user limit on the number of open files 
imposed by the kernel (other than, of course, those set by rlimits)?

- is there any fs/file-max limit other than the one imposed by the data type (int)?

- finally, how scalable is the implementation? Would having one million 
open files impose any noticeable slowdown? If so, which operations 
are affected? Opening new files or creating new processes is not that 
important, but having to search through 1,000,000 files on every 
operation would be fatal.
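For reference, the two limits the questions above ask about -- the per-process 
rlimit and the system-wide fs/file-max -- can be inspected like this (a minimal 
Python sketch; the /proc path is Linux-specific, and the values shown are just 
whatever the running system is configured with):

```python
import resource

# Per-process cap on open file descriptors: the soft and hard
# RLIMIT_NOFILE values, i.e. what "ulimit -n" / "ulimit -Hn" report.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"RLIMIT_NOFILE: soft={soft} hard={hard}")

# System-wide limit on the total number of open file handles.
# This is the fs/file-max sysctl; the path only exists on Linux.
try:
    with open("/proc/sys/fs/file-max") as f:
        print("fs/file-max:", f.read().strip())
except FileNotFoundError:
    pass  # non-Linux system
```

Raising the soft limit up to the hard limit from within the process is a 
matter of `resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))`; going 
beyond the hard limit needs privilege or a changed system configuration.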

The platform is x86_64, SLES 9 with likely update to SLES10.

Thank you!

-andrey

