Message-ID: <CA+55aFzDFD6dnRvUFW93r3TGaV9LVw6bTSABvr7vv2jGZdBAjQ@mail.gmail.com>
Date: Fri, 3 Nov 2017 18:22:19 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Kees Cook <keescook@...omium.org>
Cc: Rob Landley <rob@...dley.net>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
toybox@...ts.landley.net, enh@...il.com
Subject: Re: Regression: commit da029c11e6b1 broke toybox xargs.
On Fri, Nov 3, 2017 at 5:42 PM, Kees Cook <keescook@...omium.org> wrote:
>
> If we didn't do the "but no more than 75% of _STK_LIM", and moved to
> something like "check stack utilization after loading the binary", we
> end up in the position where the kernel is past the point of no return
> (so instead of E2BIG, the execve()ing process just SEGVs), which is
> much harder to debug or recover from (i.e. there's no process left to
> return the execve() error to).
Yeah, we've had that problem in the past, and it's the worst of all worlds.

You can still trigger it (set RLIMIT_DATA to something much too small,
for example, and then generate more than that limit by repeating the
same argument multiple times, so that the execve() caller doesn't
trigger the limit but the newly executed process does).
But it should really be something that you need to be truly insane to trigger.

I think we still don't know whether we're going to be suid at the time
we copy the arguments, do we?

So it's pretty painful to make the limits different for suid and
non-suid binaries.

            Linus