Message-ID: <561896C0.20600@mogujie.com>
Date: Sat, 10 Oct 2015 12:40:32 +0800
From: Zhang Haoyu <yuzhou@...ujie.com>
To: Zefan Li <lizefan@...wei.com>
Cc: containers@...ts.linux-foundation.org,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: pidns: Make pid accounting and pid_max per namespace

On 10/10/15 11:35, Zefan Li wrote:
> On 2015/10/9 18:29, Zhang Haoyu wrote:
>> I started multiple docker containers on CentOS 6.6 (linux-2.6.32-504.16.2),
>> and a misbehaving program was running in one of the containers.
>> This program kept spawning child threads without ever releasing them, so it
>> consumed more and more pid numbers until it hit the pid_max limit (32768 by
>> default on my system).
>>
>> What's worse, the containers and the host share the pid number space, so no
>> new process could be created on the host or in the other containers.
>>
>> I also cloned the upstream kernel source from
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
>> and the problem appears to still be there, though I'm not sure.
>>
>> IMO, we should isolate pid accounting and pid_max between pid namespaces
>> and make them per pidns.
>> The post below requested making pid_max per pidns:
>> http://thread.gmane.org/gmane.linux.kernel/1108167/focus=1111210
>>
>
> The mainline kernel already supports a per-cgroup pid limit, which should
> solve your problem.
>

What about pid accounting?
If one pidns consumes too many pids, does it affect the other pid namespaces?

Thanks,
Zhang Haoyu
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
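[Editor's note] For reference, below is a minimal sketch of the per-cgroup pid
limit Zefan Li refers to, i.e. the "pids" cgroup controller that was merged
upstream around this time (Linux 4.3). It is not taken from this thread; the
cgroup v1 mount point /sys/fs/cgroup/pids, the group name "demo", and the
limit of 1024 are assumptions for illustration only.

```c
/*
 * Sketch: cap the number of tasks a container can create using the
 * "pids" cgroup controller, assuming it is mounted (cgroup v1 style)
 * at /sys/fs/cgroup/pids. Group name and limit are arbitrary.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_file(const char *path, const char *value)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(EXIT_FAILURE);
	}
	fprintf(f, "%s", value);
	fclose(f);
}

int main(void)
{
	char buf[32];

	/* Create a child group under the pids controller. */
	if (mkdir("/sys/fs/cgroup/pids/demo", 0755) && errno != EEXIST) {
		perror("mkdir");
		return EXIT_FAILURE;
	}

	/* Limit the number of tasks (processes + threads) in this group. */
	write_file("/sys/fs/cgroup/pids/demo/pids.max", "1024");

	/*
	 * Move the current process into the group; everything it forks or
	 * clones from now on is charged against pids.max, so a runaway
	 * thread spawner in this group can no longer exhaust the global
	 * pid_max for the host and the other containers.
	 */
	snprintf(buf, sizeof(buf), "%d", getpid());
	write_file("/sys/fs/cgroup/pids/demo/cgroup.procs", buf);

	return EXIT_SUCCESS;
}
```

This caps how many tasks the group may hold, which addresses the exhaustion
scenario described above; pid number allocation itself still comes from the
shared pid space governed by the pid_max sysctl.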