Date:   Thu, 18 Aug 2016 10:42:09 -0700
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     Rik van Riel <riel@...hat.com>
Cc:     Kees Cook <keescook@...omium.org>,
        Laura Abbott <labbott@...oraproject.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-mm <linux-mm@...ck.org>,
        kernel test robot <xiaolong.ye@...el.com>
Subject: Re: [PATCH] usercopy: Skip multi-page bounds checking on SLOB

On Thu, Aug 18, 2016 at 7:21 AM, Rik van Riel <riel@...hat.com> wrote:
>
> One big question I have for Linus is, do we want
> to allow code that does a higher order allocation,
> and then frees part of it in smaller orders, or
> individual pages, and keeps using the remainder?

Yes. We've even had people do that, afaik. IOW, if you know you're
going to allocate 16 pages, you can try to do an order-4 allocation
and just use the 16 pages directly (but still as individual pages),
and avoid extra allocation costs (and perhaps get better access
patterns if the allocation succeeds, etc).

That sounds odd, but it actually makes sense when you have the order-4
allocation as an optimistic path (and fall back to doing smaller orders
when a big-order allocation fails). To make that *purely* just an
optimization, you need to let the user then treat that order-4
allocation as individual pages, and free them one by one etc.

So I'm not sure anybody actually does that, but the buddy allocator
was partly designed for that case.
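To make that concrete, the optimistic path might look something like
this (a sketch only: split_page() is the existing helper for exactly
this case, but the function and the names around it are made up for
illustration):

     #include <linux/gfp.h>  /* alloc_pages(), __free_page() */
     #include <linux/mm.h>   /* split_page() */

     static struct page *pages[16];

     static int alloc_sixteen(void)
     {
             struct page *head;
             int i;

             /* Optimistic path: one order-4 allocation covering
              * all 16 pages at once. */
             head = alloc_pages(GFP_KERNEL | __GFP_NOWARN, 4);
             if (head) {
                     /* Split it so each page becomes an independent
                      * order-0 page that can be freed on its own. */
                     split_page(head, 4);
                     for (i = 0; i < 16; i++)
                             pages[i] = head + i;
                     return 0;
             }

             /* Fallback: 16 separate order-0 allocations
              * (error unwinding omitted in this sketch). */
             for (i = 0; i < 16; i++)
                     pages[i] = alloc_page(GFP_KERNEL);
             return 0;
     }

     /* Either way, callers can later free the pages one by one:
      * __free_page(pages[i]); */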

> From both a hardening and a simple stability
> point of view, allowing memory to be allocated
> in one size, and freed in another, seems like
> it could be asking for bugs.

Quite frankly, I'd much rather make a hard rule that "user
copies can never be more than one page".

There are *very* few things that actually have bigger user copies, and
we could make those explicitly loop, or mark them as such.
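The explicit loop for those few cases would be straightforward,
something like this (a sketch only; copy_from_user_big() is a made-up
name, not an existing interface):

     #include <linux/kernel.h>       /* min_t() */
     #include <linux/uaccess.h>      /* copy_from_user() */

     static unsigned long copy_from_user_big(void *to,
                             const void __user *from, unsigned long n)
     {
             while (n) {
                     unsigned long chunk = min_t(unsigned long, n,
                                                 PAGE_SIZE);

                     /* Each individual copy is now at most one page
                      * of data, so it satisfies the hard rule. */
                     if (copy_from_user(to, from, chunk))
                             return n;  /* upper bound on bytes missed */
                     to += chunk;
                     from += chunk;
                     n -= chunk;
             }
             return 0;
     }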

The single-page size limit is fairly natural, because both our
page cache and our pathname handling are limited to single pages.

The only things that generally aren't single pages tend to be:

 - module loading code with vmalloc destination

   We *already* chunk that for other reasons, although we ended up
making the chunk size be 16 pages. Making it a single page wouldn't
really hurt anything.

 - we probably have networking cases that might have big socket buffer
allocations etc.

 - there could be some very strange drivers, but we'd find them fairly
quickly if we just start out by making the copy_from/to_user()
callers unconditionally have a

     WARN_ON_ONCE((len) > PAGE_SIZE);

  and not making it fatal, but making it easy to find.
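In sketch form, that check could be wrapped like this (the helper
name is made up; the point is just that the check warns once per
call site but never kills anything):

     #include <linux/bug.h>          /* WARN_ON_ONCE() */
     #include <linux/uaccess.h>      /* copy_from_user() */

     static inline unsigned long
     checked_copy_from_user(void *to, const void __user *from,
                            unsigned long n)
     {
             WARN_ON_ONCE(n > PAGE_SIZE);    /* flag it, don't be fatal */
             return copy_from_user(to, from, n);
     }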

Anyway, I think *that* would be a much easier rule to start with than
worrying about page crossing.

Yes, page crossing can be nasty, and maybe we can try to aim for that
in the future (and mark things like the FPU saving as special, because
it really is very, very unusual), but I'd actually prefer the 4kB rule
first, because that would also allow us to just get rid of the odd
vmalloc special cases etc. in the checking.

            Linus
