Date:	Fri, 07 Feb 2014 12:11:08 -0800
From:	Davidlohr Bueso <davidlohr@...com>
To:	m@...odev.com, akpm@...ux-foundation.org
Cc:	linux-kernel@...r.kernel.org,
	Manfred Spraul <manfred@...orfullife.com>, dledford@...hat.com
Subject: Re: Max number of posix queues in vanilla kernel (/proc/sys/fs/mqueue/queues_max)

On Thu, 2014-02-06 at 12:21 +0200, m@...odev.com wrote:
> Hi Folks,
> 
> I have recently ported my multi-process application (a classical
> open-system design) which uses POSIX message queues for IPC to one of
> the latest Linux kernels, and I have found that the maximum number of
> queues is now drastically limited to 1024 (see
> include/linux/ipc_namespace.h, #define HARD_QUEUESMAX 1024).
> 
> Previously the maximum number of queues was INT_MAX (2147483647 on a
> 64-bit system).

Hmm yes, 1024 is quite unrealistic for some workloads and breaks
userspace - I don't see any reason for _this_ specific value in the
changelog or related changes in the patchset that introduced commits
93e6f119 and 02967ea0. And the fact that this limit is per namespace
really makes no difference. Hell, if nothing else, the mq_overview(7)
manpage description is evidence enough. For privileged users:

    The default value for queues_max is 256; it can be changed to any
    value in the range 0 to INT_MAX.
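
For anyone who wants to reproduce this, here is a quick and dirty test
(an untested sketch; the queue-name prefix and the 2048 count are
arbitrary, build with -lrt) that simply creates queues until mq_open()
refuses:

#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	char name[64];
	int i;

	/* Try to create more queues than HARD_QUEUESMAX (1024). */
	for (i = 0; i < 2048; i++) {
		snprintf(name, sizeof(name), "/qmax-test-%d", i);
		if (mq_open(name, O_RDWR | O_CREAT | O_EXCL, 0600,
			    NULL) == (mqd_t)-1) {
			fprintf(stderr, "queue %d: %s\n", i,
				strerror(errno));
			break;
		}
	}
	printf("created %d queues\n", i);

	/* Clean up so repeated runs don't hit stale queues. */
	while (i-- > 0) {
		snprintf(name, sizeof(name), "/qmax-test-%d", i);
		mq_unlink(name);
	}
	return 0;
}

If I read the current code right, with CAP_SYS_RESOURCE this now stops
with ENOSPC at 1024 no matter how high queues_max is set; an
unprivileged user hits the queues_max default of 256 first.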

> 
> This change imposes a painful limit on our multi-process application.
> Each process opens its own set of queues (usually about 3-5 queues per
> process), and in some scenarios we may run up to 3000 processes or
> more (which, of course, is not a problem for Linux). Thus we may need
> up to 9000 queues or more. All processes run under one user.
> 
> Now this limit breaks our software and we are getting into trouble. We
> could patch the kernel manually, but not all customers are capable of
> doing so or willing to run a patched kernel.
> 
> Thus I *kindly* ask you guys to increase this limit to something like
> 1M queues or more (or to the technical limit, i.e. leave it at
> INT_MAX). If a user can screw up the system by setting and using such
> maximums, leave that to the user: whoever does the system tuning is
> responsible for the kernel parameters.
> 
> The kernel limit was introduced by:
> -
> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=93e6f119c0ce8a1bba6e81dc8dd97d67be360844
> 
> Also, other people are reporting issues with this, see:
> - https://bugs.launchpad.net/ubuntu/+source/manpages/+bug/1155695 -
> for them, some database software stopped working after the kernel
> upgrade...

I'm surprised we didn't hear about this earlier from Michael Kerrisk.
At least the upstream manpages haven't been updated to reflect this new
behavior; that would have been the wrong way to go.

> 
> Also, when people upgrade from RHEL 5 or RHEL 6 to the next versions,
> where this hard limit will be present, I suspect that many will
> complain about it...

Agreed, RHEL 7 will ship with some baseline version of the 3.10 kernel
and users will be exposed to this. Of course, the same goes for just
about any distro, and Ubuntu users are already complaining about it.

I believe that instead of bumping up this HARD limit of 1024, we should
go back to the original behavior. If we just increase it instead, how
high is high enough?
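
Something along these lines (completely untested, and whether we keep
HARD_QUEUESMAX at all is a separate question) would restore the old
semantics, leaving the per-namespace queues_max sysctl as the only real
knob:

--- a/include/linux/ipc_namespace.h
+++ b/include/linux/ipc_namespace.h
@@
-#define HARD_QUEUESMAX 1024
+#define HARD_QUEUESMAX INT_MAX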

Thanks,
Davidlohr
