Date:   Thu, 11 Apr 2019 11:33:04 -0700
From:   Kees Cook <>
To:     Eric Biggers <>,
        Dmitry Vyukov <>
Cc:     Geert Uytterhoeven <>,
        Herbert Xu <>,
        linux-security-module <>,
        Linux ARM <>,
        Linux Crypto Mailing List <>,
        Linux Kernel Mailing List <>,
        Laura Abbott <>,
        Rik van Riel <>
Subject: Re: crypto: Kernel memory overwrite attempt detected to spans
 multiple pages

On Thu, Apr 11, 2019 at 10:58 AM Eric Biggers <> wrote:
> On Wed, Apr 10, 2019 at 04:27:28PM -0700, Kees Cook wrote:
> > On Wed, Apr 10, 2019 at 4:12 PM Eric Biggers <> wrote:
> > > You've explained *what* it does again, but not *why*.  *Why* do you want
> > > hardened usercopy to detect copies across page boundaries, when there is no
> > > actual buffer overflow?
> >
> > But that *is* how it determines it was a buffer overflow: "if you
> > cross page boundaries (of a non-compound allocation), it *is* a buffer
> > overflow". This assertion, however, is flawed because many contiguous
> > allocations are not marked as being grouped together when in reality
> > they were. It was an attempt to get allocation size information out of
> > the page allocator, similar to how slab can be queried about
> > allocation size. I'm open to improvements here, since it's obviously
> > broken in its current state. :)
> >
> > --
> > Kees Cook
> Well, I'm still at a loss as to whether I'm actually supposed to "fix" this by
> adding __GFP_COMP, or whether you're saying the option is broken anyway so I

I would love it if you could fix it, yes.

> shouldn't bother doing anything.  IIUC, even the kernel stack is still not
> marked __GFP_COMP, so copies to/from the stack can trigger this too, despite
> this being reported over 2 years ago
> (
> CONFIG_HARDENED_USERCOPY_PAGESPAN is even disabled in syzbot because you already
> said the option is broken and should not be used.

Stacks are checked before PAGESPAN, so that particular problem should
no longer be present since commit 7bff3c069973 ("mm/usercopy.c: no
check page span for stack objects").

> I worry that people will enable all the hardened usercopy options "because
> security", then when the pagespan check breaks something they will disable all
> hardened usercopy options, because they don't understand the individual options.
> Providing broken options is actively harmful, IMO.

It's behind EXPERT, default-n, and says:

          When a multi-page allocation is done without __GFP_COMP,
          hardened usercopy will reject attempts to copy it. There are,
          however, several cases of this in the kernel that have not all
          been removed. This config is intended to be used only while
          trying to find such users.

I'd rather leave it since it's still useful.

Perhaps it could be switched to WARN by default and we reenable it in
syzbot to improve its utility there?

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 14faadcedd06..6e7e28fe062b 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -208,8 +208,13 @@ static inline void check_page_span(const void *ptr, unsigned long n,
        is_reserved = PageReserved(page);
        is_cma = is_migrate_cma_page(page);
-       if (!is_reserved && !is_cma)
-               usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
+       if (!is_reserved && !is_cma) {
+               usercopy_warn("spans multiple pages without __GFP_COMP",
+                               NULL, to_user,
+                               (unsigned long)ptr & (unsigned long)PAGE_MASK,
+                               n);
+               return;
+       }

        for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
                page = virt_to_head_page(ptr);

Kees Cook
