Message-ID: <CANd1uZ=LZXtG9n+w+DA+z2N5p=oiBJ506=2VnyeNXU18=LobSQ@mail.gmail.com>
Date: Tue, 6 Jan 2015 13:27:21 +0000
From: Jay Foad <jay.foad@...il.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Greg KH <gregkh@...e.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"# .39.x" <stable@...nel.org>, stable-review@...nel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Alan Cox <alan@...rguk.ukuu.org.uk>
Subject: Re: [2/3] mm: fix up some user-visible effects of the stack guard page
On 5 January 2015 at 21:03, Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
> On Mon, Jan 5, 2015 at 2:21 AM, Jay Foad <jay.foad@...il.com> wrote:
>> Sorry for replying to this old email...
>>
>>> commit d7824370e26325c881b665350ce64fb0a4fde24a upstream
>
> Heh. From august 2010. That's 4+ years ago.. How come it was noticed
> only now?
*Not* because we've just upgraded from an ancient kernel. Rather,
it's because we've started running the asan tests on powerpc64, which
has 64k pages rather than 4k, and that just happens to trigger the
failure. A bit of git-blame detective work led me to this ancient
commit.
>> Address sanitizer tries to find the mapping for the current thread's
>> stack by iterating through the entries in /proc/self/maps looking for
>> one that contains the address of some random stack variable. This
>> fails if the stack mapping has already used up all of its RLIMIT_STACK
>> quota, because in that case check_stack_guard_page() will fail to add
>> a guard page, but show_map_vma() will still assume that the first page
>> of the stack *is* a guard page, and won't report it in /proc/maps.
>>
>> Here's a small program that demonstrates the failure:
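[The program itself got trimmed from the quote, but its core is the
/proc/self/maps walk described above. Here's a minimal sketch of that
lookup - my reconstruction for anyone following along, not the
original repro:

#include <stdio.h>

int main(void)
{
	int probe;	/* some random stack variable */
	unsigned long addr = (unsigned long)&probe;
	char line[512];
	FILE *f = fopen("/proc/self/maps", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof line, f)) {
		unsigned long start, end;
		/* each line starts "start-end perms ..." */
		if (sscanf(line, "%lx-%lx", &start, &end) == 2 &&
		    addr >= start && addr < end) {
			printf("stack mapping: %s", line);
			fclose(f);
			return 0;
		}
	}
	fclose(f);
	/* This is the failure mode: no entry contains the address,
	 * because the first page of the stack was hidden as a
	 * "guard page" that was never actually inserted. */
	fprintf(stderr, "no mapping contains the stack address!\n");
	return 1;
}
]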
>
> Yup, your analysis sounds correct. My completely untested gut feel is
> that the problem is that we don't actually return the error from the
> expand_stack() call, so then do_anonymous_page() will allow the extra
> guard-page access.
>
> IOW, *maybe* a patch like this. TOTALLY UNTESTED! I may have missed
> something, and this may be complete crap.
(Later)
> .. and it's still untested, but I'm pretty sure it's right - except we
> should probably also allow an extra page for the stack rlimit (so that
> the guard page doesn't count towards the limit).
(Later)
> It does indeed seem to work.
>
> But see my other email about this effectively making the stack limit
> one page smaller. I'm not entirely sure it's worth fixing, but maybe
> in another four years somebody will post a regression email about that
> ;)
>
> IOW, I'd be inclined to say "stack limit includes guard page", and
> just wait and see whether that causes any issues.
I don't have a problem with that, but I suppose some users might have
to bump their stack limit up by a page, if they were already sailing
close to the wind.
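[For anyone affected: bumping the soft limit by one page from within
the program looks roughly like this - a sketch, and of course a shell
"ulimit -s" before launch does the same job:

#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_STACK, &rl) != 0)
		return 1;
	if (rl.rlim_cur != RLIM_INFINITY) {
		/* Allow one extra page so the guard page doesn't eat
		 * into the space the program was actually counting on. */
		rl.rlim_cur += sysconf(_SC_PAGESIZE);
		if (rl.rlim_max != RLIM_INFINITY && rl.rlim_cur > rl.rlim_max)
			rl.rlim_cur = rl.rlim_max;
		if (setrlimit(RLIMIT_STACK, &rl) != 0)
			perror("setrlimit");
	}
	printf("stack soft limit: %llu bytes\n",
	       (unsigned long long)rl.rlim_cur);
	return 0;
}

Raising the soft limit up to the hard limit needs no privileges, and
only affects this process and its children.]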
> But regardless, I'd like you to test the patch in your environment
> (with the full real load, not just the demonstration test case), since
> you are clearly doing special things. My limited testing was just that
> - very limited.
I can't test kernel patches on the powerpc64 box as I don't even have
root access, but I've done the best I can on my own x86_64 machine:
- rebuilt my current 3.16.0 kernel both without and with your patch
- tested both my small repro and the original asan stack-overflow test
with a variety of small stack limits (12k, 16k and 20k)
- run the full address sanitizer test suite
All looks good *except* that stack overflow tests are now getting
SIGBUS where they used to get SIGSEGV. Asan was only handling SIGSEGV,
but it seems entirely reasonable for it to handle SIGBUS in the same
way too (indeed it was already doing that on Darwin, just not on
Linux). If I make that change then the whole test suite runs cleanly
with your patch.
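[For reference, the asan-side change amounts to treating SIGBUS like
SIGSEGV. A minimal standalone version of that pattern - my sketch,
not asan's actual code - looks like:

#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void overflow_handler(int sig, siginfo_t *info, void *ctx)
{
	(void)info; (void)ctx;
	const char *msg = (sig == SIGBUS)
		? "caught SIGBUS (likely stack overflow)\n"
		: "caught SIGSEGV (likely stack overflow)\n";

	/* Only async-signal-safe calls in here. */
	write(STDERR_FILENO, msg, strlen(msg));
	_exit(1);
}

int main(void)
{
	/* A stack-overflow handler must run on an alternate stack,
	 * since the normal stack is exactly what just ran out. */
	stack_t ss;
	ss.ss_sp = malloc(SIGSTKSZ);
	ss.ss_size = SIGSTKSZ;
	ss.ss_flags = 0;
	sigaltstack(&ss, NULL);

	struct sigaction sa;
	memset(&sa, 0, sizeof sa);
	sa.sa_sigaction = overflow_handler;
	sa.sa_flags = SA_SIGINFO | SA_ONSTACK;
	sigaction(SIGSEGV, &sa, NULL);
	sigaction(SIGBUS, &sa, NULL);	/* the new bit */

	/* ... run the program; a deep recursion is now reported
	 * whether the kernel sends SIGSEGV or SIGBUS. */
	return 0;
}
]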
Thus:
Tested-by: Jay Foad <jay.foad@...il.com>
Thanks,
Jay.
--