Message-ID: <CAGXu5jLFbgEVhzpNNiSBAT-QoMYamx9o3dYqTJHhDihEtmuReA@mail.gmail.com>
Date: Fri, 19 Aug 2016 11:00:12 -0700
From: Kees Cook <keescook@...omium.org>
To: Rik van Riel <riel@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Laura Abbott <labbott@...oraproject.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
kernel test robot <xiaolong.ye@...el.com>
Subject: Re: [PATCH] usercopy: Skip multi-page bounds checking on SLOB
On Thu, Aug 18, 2016 at 11:02 AM, Rik van Riel <riel@...hat.com> wrote:
> On Thu, 2016-08-18 at 10:42 -0700, Linus Torvalds wrote:
>> On Thu, Aug 18, 2016 at 7:21 AM, Rik van Riel <riel@...hat.com>
>> wrote:
>> >
>> > One big question I have for Linus is, do we want
>> > to allow code that does a higher order allocation,
>> > and then frees part of it in smaller orders, or
>> > individual pages, and keeps using the remainder?
>>
>> Yes. We've even had people do that, afaik. IOW, if you know you're
>> going to allocate 16 pages, you can try to do an order-4 allocation
>> and just use the 16 pages directly (but still as individual pages),
>> and avoid extra allocation costs (and to perhaps get better access
>> patterns if the allocation succeeds etc etc).
>>
>> That sounds odd, but it actually makes sense when you have the
>> order-4 allocation as an optimistic path (and fall back to doing
>> smaller orders when a big-order allocation fails). To make that
>> *purely* just an optimization, you need to let the user then treat
>> that order-4 allocation as individual pages, and free them one by
>> one etc.
>>
>> So I'm not sure anybody actually does that, but the buddy allocator
>> was partly designed for that case.
>
> That makes sense. With that in mind,
> it would probably be better to just drop
> all of the multi-page bounds checking
> from the usercopy code, not conditionally
> on SLOB.
>
> Alternatively, we could turn the
> __GFP_COMP flag into its negative, and
> set it only on the code paths that do
> what Linus describes (if anyone does
> it).
>
> A WARN_ON_ONCE in the page freeing code
> could catch these cases, and point people
> at exactly what to do if they trigger the
> warning.
>
> I am unclear on how to exclude legitimate
> usercopies that are larger than PAGE_SIZE
> from triggering warnings/errors, if we
> cannot identify every buffer where larger
> copies are legitimately going.
>
> Having people rewrite their usercopy code
> into loops that automatically avoid
> triggering page crossing or >PAGE_SIZE
> checks would be counterproductive, since
> that might just open up new attack surface.
Yeah, I agree: we want to have centralized bounds checking and if we
offload >PAGE_SIZE copies to the callers, we're asking for a world of
hurt.
One thing I'm expecting to add in the future is a const-sized
copy_*_user API. This will give us a way to make exceptions to
non-whitelisted slab entries, given that the bounds are const at
compile time. It would behave more like get/put_user in that regard,
but could still handle small exceptions to allocations that would have
been otherwise disallowed (in the forthcoming
HARDENED_USERCOPY_WHITELIST series).
If we encounter another case of a multi-page false positive, we can
just entirely drop that check. For now, let's keep this removed for
SLOB only, and move forward.
-Kees
--
Kees Cook
Nexus Security