Message-ID: <1471543363.2581.30.camel@redhat.com>
Date: Thu, 18 Aug 2016 14:02:43 -0400
From: Rik van Riel <riel@...hat.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Kees Cook <keescook@...omium.org>,
Laura Abbott <labbott@...oraproject.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
kernel test robot <xiaolong.ye@...el.com>
Subject: Re: [PATCH] usercopy: Skip multi-page bounds checking on SLOB
On Thu, 2016-08-18 at 10:42 -0700, Linus Torvalds wrote:
> On Thu, Aug 18, 2016 at 7:21 AM, Rik van Riel <riel@...hat.com>
> wrote:
> >
> > One big question I have for Linus is, do we want
> > to allow code that does a higher order allocation,
> > and then frees part of it in smaller orders, or
> > individual pages, and keeps using the remainder?
>
> Yes. We've even had people do that, afaik. IOW, if you know you're
> going to allocate 16 pages, you can try to do an order-4 allocation
> and just use the 16 pages directly (but still as individual pages),
> and avoid extra allocation costs (and to perhaps get better access
> patterns if the allocation succeeds etc etc).
>
> That sounds odd, but it actually makes sense when you have the order-
> 4
> allocation as a optimistic path (and fall back to doing smaller
> orders
> when a big-order allocation fails). To make that *purely* just an
> optimization, you need to let the user then treat that order-4
> allocation as individual pages, and free them one by one etc.
>
> So I'm not sure anybody actually does that, but the buddy allocator
> was partly designed for that case.
That makes sense. With that in mind,
it would probably be better to just drop
all of the multi-page bounds checking
from the usercopy code entirely, rather
than only conditionally on SLOB.
Alternatively, we could invert the sense
of the __GFP_COMP flag, and set the new
flag only on the code paths that do
what Linus describes (if anyone actually
does it).
A WARN_ON_ONCE in the page freeing code
could catch these cases, and point people
at exactly what to do if they trigger the
warning.
I am unclear on how to exclude legitimate
usercopies that are larger than PAGE_SIZE
from triggering warnings/errors, if we
cannot identify every buffer where larger
copies are legitimately going.
Having people rewrite their usercopy code
into loops that automatically avoid
triggering the page-crossing or >PAGE_SIZE
checks would be counterproductive, since
that might just open up new attack surface.