Message-ID: <CACT4Y+b7TKbS6iaPLuo0iZStRJQ3BOBLTG1ZyC2nrh-=66bWDA@mail.gmail.com>
Date: Fri, 22 Jan 2016 22:38:40 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Al Viro <viro@...iv.linux.org.uk>
Cc: David Howells <dhowells@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Miklos Szeredi <mszeredi@...e.cz>,
syzkaller <syzkaller@...glegroups.com>,
Kostya Serebryany <kcc@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Eric Dumazet <edumazet@...gle.com>,
Sasha Levin <sasha.levin@...cle.com>,
Robert Swiecki <swiecki@...gle.com>,
Kees Cook <keescook@...gle.com>
Subject: Re: fs: sandboxed process brings host down
On Fri, Jan 22, 2016 at 10:21 PM, Al Viro <viro@...iv.linux.org.uk> wrote:
> On Fri, Jan 22, 2016 at 10:06:14PM +0100, Dmitry Vyukov wrote:
>> Hello,
>>
>> While running syzkaller fuzzer I hit the following problem. Supervisor
>> process sandboxes worker processes that do random activities with
>> CLONE_NEWUSER | CLONE_NEWNS | CLONE_NEWPID | CLONE_NEWUTS |
>> CLONE_NEWNET | CLONE_NEWIPC | CLONE_IO, setrlimit, chroot, etc.
>> Because of that worker process gains ability to bring whole machine
>> down (does not happen without the sandbox).
>
> AFAICS, what you are doing is essentially mount --rbind / / in infinite
> loop in luserns. Which ends up eating all memory. There's any number
> of ways to do the same. We can play whack-a-mole with them until the
> kernel is completely ossified with accounting code of different sorts.
> Or one can disable userns and be done with that.
My 2GB VM dies at around just the 10th iteration; is that normal?
Each iteration consumes several hundred megabytes of kernel memory,
and there seems to be an exponential slowdown around the 5th
iteration.

I understand that there can be many forms of local DoS. But there
seems to be something pathological about this particular one. And it
happens only with the sandboxing that is meant to reduce DoS
possibilities...
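The exponential slowdown is consistent with the behavior Al describes:
rbind-mounting / onto / copies the entire existing mount tree under
itself, so the number of mounts roughly doubles per iteration. A
minimal back-of-the-envelope model (assumption: exact doubling from a
given starting tree size; real growth depends on mount propagation and
can be worse):

```python
# Model the mount count after N iterations of `mount --rbind / /`.
# Assumption (hypothetical model, not measured): each rbind duplicates
# the whole existing tree under itself, doubling the mount count.
def mounts_after(iterations, start=1):
    count = start
    for _ in range(iterations):
        count *= 2  # entire tree is copied beneath itself
    return count

# 10 iterations turn 1 mount into 1024; a realistic initial tree of
# ~30 mounts becomes ~30k mounts, each with kernel-side allocations,
# which would plausibly account for hundreds of MB per run.
print(mounts_after(10))         # single initial mount
print(mounts_after(10, 30))     # ~30-mount initial tree
```

This doubling is why only a handful of iterations are needed to
exhaust a 2GB VM, and why the slowdown looks exponential rather than
linear.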