Date:   Sun, 12 Jul 2020 22:53:54 -0400
From:   Joel Fernandes <joel@...lfernandes.org>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Naresh Kamboju <naresh.kamboju@...aro.org>,
        linux-stable <stable@...r.kernel.org>,
        open list <linux-kernel@...r.kernel.org>,
        linux-mm <linux-mm@...ck.org>, Arnd Bergmann <arnd@...db.de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Roman Gushchin <guro@...com>, Michal Hocko <mhocko@...nel.org>,
        lkft-triage@...ts.linaro.org, Chris Down <chris@...isdown.name>,
        Michel Lespinasse <walken@...gle.com>,
        Fan Yang <Fan_Yang@...u.edu.cn>,
        Brian Geffon <bgeffon@...gle.com>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Will Deacon <will@...nel.org>,
        Catalin Marinas <catalin.marinas@....com>, pugaowei@...il.com,
        Jerome Glisse <jglisse@...hat.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Hugh Dickins <hughd@...gle.com>,
        Al Viro <viro@...iv.linux.org.uk>, Tejun Heo <tj@...nel.org>,
        Sasha Levin <sashal@...nel.org>
Subject: Re: WARNING: at mm/mremap.c:211 move_page_tables in i386

Hi Linus,

On Sun, Jul 12, 2020 at 03:58:06PM -0700, Linus Torvalds wrote:
> On Sun, Jul 12, 2020 at 2:50 PM Joel Fernandes <joel@...lfernandes.org> wrote:
> >
> > I reproduced Naresh's issue on a 32-bit x86 machine and the below patch fixes it.
> > The issue is solely within execve() itself and the way it allocates/copies the
> > temporary stack.
> >
> > It is actually indeed an overlapping case because the length of the
> > stack is big enough to cause overlap. The VMA grows quite a bit because of
> > all the page faults that happen due to the copy of the args/env. Then during
> > the move of overlapped region, it finds that a PMD is already allocated.
> 
> Oh, ok, I think I see.
> 
> So the issue is that while move_normal_pmd() _itself_ will be clearing
> the old pmd entries when it copies them, the 4kB copies that happened
> _before_ this time will not have cleared the old pmd that they were
> in.
> 
> So we've had a deeper stack, and we've already copied some of it one
> page at a time, and we're done with those 4kB entries, but now we hit
> a 2MB-aligned case and want to move that down. But when it does that,
> it hits the (by now hopefully empty) pmd that used to contain the 4kB
> entries we copied earlier.
> 
> So we've cleared the page table, but we've simply never called
> pgtable_clear() here, because the page table got cleared in
> move_ptes() when it did that
> 
>                 pte = ptep_get_and_clear(mm, old_addr, old_pte);
> 
> on the previous old pte entries.
> 
> That makes sense to me. Does that match what you see? Because when I
> said it wasn't overlapped, I was really talking about that one single
> pmd move itself not being overlapped.

This matches exactly what you mentioned.

Here is some analysis with specific numbers:

Some time during execve(), copy_strings() causes page faults. While this
happens, the VMA grows and a new PMD is allocated during the page fault:

copy_strings: Copying args/env old process's memory 8067420 to new stack bfb0eec6 (len 4096)
handle_mm_fault: vma: bfb0d000 c0000000
handle_mm_fault: pmd_alloc: ad: bfb0dec6 ptr: f46d0bf8

Here we see that the pmd entry's (pmd_t *) pointer value is f46d0bf8 and
the fault was on address bfb0dec6.

Similarly, I noted the pattern of the other copy_strings() faults and built the
following mapping:

address space	 	pmd entry pointer
0xbfe00000-0xc0000000	f4698ff8
0xbfc00000-0xbfe00000	f4698ff0
0xbfa00000-0xbfc00000	f4698fe8

This is all from tracing the copy_strings().
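
As a sanity check on that table: assuming standard i386 PAE geometry
(PMD_SHIFT == 21, PTRS_PER_PMD == 512, 8-byte pmd_t entries) and assuming all
three entries live in a single pmd page at 0xf4698000 (inferred from the
printed pointers), the pmd_index() arithmetic reproduces the pointers above.
A throwaway userspace sketch, not kernel code:

#include <stdio.h>
#include <stdint.h>

/* Assumed i386 PAE geometry: each pmd entry covers 2MB, 512 entries per
 * page, sizeof(pmd_t) == 8. The base 0xf4698000 is inferred from the
 * trace above. */
#define PMD_SHIFT       21
#define PTRS_PER_PMD    512

int main(void)
{
        uintptr_t pmd_page = 0xf4698000;
        uint32_t addrs[] = { 0xbfe00000, 0xbfc00000, 0xbfa00000 };

        for (int i = 0; i < 3; i++) {
                unsigned int idx = (addrs[i] >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
                printf("%#x -> pmd index %u -> entry at %#lx\n",
                       addrs[i], idx, (unsigned long)(pmd_page + idx * 8));
        }
        return 0;
}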

Then later, the kernel decides to move the VMA down. The VMA's total size is
5MB. The stack is initially at a 1MB-aligned address: 0xbfb00000. exec requests
move_page_tables() to move it down by 4MB, that is, to 0xbf700000, which is
also 1MB aligned. Since the shift (4MB) is smaller than the VMA (5MB), this is
an overlapping move.
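
Laying the ranges out makes the overlap explicit:

        old stack VMA:  0xbfb00000 - 0xc0000000   (5MB)
        new stack VMA:  0xbf700000 - 0xbfc00000   (old shifted down by 4MB)
        overlap:        0xbfb00000 - 0xbfc00000   (1MB)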

When move_page_tables() starts, I see the following prints before the warning fires.

The plan of attack is: first copy 1MB using move_ptes(), like you said. Then
it hits a 2MB-aligned boundary and switches to move_normal_pmd(). For both
move_ptes() and move_normal_pmd(), a pmd_alloc() happens first, which is
printed below:

move_page_tables: move_page_tables old=bfb00000 (len:5242880) new=bf700000
move_page_tables: pmd_alloc: ad: bf700000 ptr: f4698fd8
move_page_tables: move_ptes old=bfb00000->bfc00000 new=bf700000
move_page_tables: pmd_alloc: ad: bf800000 ptr: f4698fe0
move_page_tables: move_normal_pmd: old: bfc00000-c0000000 new: bf800000 (val: 0)
move_page_tables: pmd_alloc: ad: bfa00000 ptr: f4698fe8
move_page_tables: move_normal_pmd: old: bfe00000-c0000000 new: bfa00000 (val: 34164067)

So basically, it did 1MB worth of move_ptes(), and then twice 2MB worth of
move_normal_pmd(). Since the shift was only 4MB, it hit an already populated
PMD entry during the second move_normal_pmd(). The warning fires because that
move_normal_pmd() sees the destination PMD covering 0xbfa00000 (the one with
val: 34164067 above) is already populated.
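
For anyone reading along, the check that fires is the pmd_none() sanity test
near the top of move_normal_pmd(). Heavily abridged and paraphrased from my
reading of mm/mremap.c (around v5.8-rc), so take it as a sketch rather than a
verbatim quote:

        /*
         * move_normal_pmd(), abridged: only reached when old_addr/new_addr
         * are PMD-aligned and a full PMD_SIZE extent remains.
         *
         * The destination pmd is expected to be empty. In the exec case
         * above it is not: move_ptes() cleared the individual ptes of the
         * old stack's bottom 1MB, but the (now empty) pte page is still
         * hooked into the pmd entry that also covers the new address
         * 0xbfa00000.
         */
        if (WARN_ON(!pmd_none(*new_pmd)))
                return false;

        /*
         * Otherwise it takes the pmd locks and moves the whole page table:
         * pmd = *old_pmd; pmd_clear(old_pmd); set_pmd_at(mm, new_addr, new_pmd, pmd);
         */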

As you said, this is harmless.

One thing I observed from code reading is that move_page_tables() is (in the
mremap case) only called for non-overlapping moves (through mremap_to() or
move_vma(), as the case may be). It makes me nervous that in exec we are doing
an overlapping move and could inadvertently be triggering some bug.

Could you let me know if there is any reason why we should not make the new
stack's location non-overlapping, just to keep things simple? (Assuming we fix
the randomization-override issue you mention below.)

> > The below patch fixes it and is not warning anymore in 30 minutes of testing
> > so far.
> 
> So I'm not hugely happy with the patch, I have to admit.
> 
> Why?
> 
> Because:
> 
> > +       /* Ensure the temporary stack is shifted by atleast its size */
> > +       if (stack_shift < (vma->vm_end - vma->vm_start))
> > +               stack_shift = (vma->vm_end - vma->vm_start);
> 
> This basically overrides the random stack shift done by arch_align_stack().
> 
> Yeah, yeah, arch_align_stack() is kind of misnamed. It _used_ to do
> what the name implies it does, which on x86 is to just give the
> minimum ABI-specified 16-byte stack alignment.
> But these days, what it really does is say "align the stack pointer,
> but also shift it down by a random amount within 8kB so that the start
> of the stack isn't some repeatable thing that an attacker can
> trivially control with the right argv/envp size.."

Got it, thanks. I will work on improving the patch along these lines.
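
For reference, the x86 implementation is short; quoting roughly from memory
(arch/x86/kernel/process.c), so the details may be slightly off:

        unsigned long arch_align_stack(unsigned long sp)
        {
                /* Random downward shift within 8kB, then 16-byte alignment. */
                if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
                        sp -= get_random_int() % 8192;
                return sp & ~0xf;
        }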

> I don't think it works right anyway because we then PAGE_ALIGN() it,
> but whatever.

:)

> But it looks like what you're doing means that now the size of the
> stack will override that shift, and that means that now the
> randomization is entirely undone. No?

Yes, true. I will work on fixing it.

> Plus you do it after we've also checked that we haven't grown the
> stack down to below mmap_min_addr().
> 
> But maybe I misread that patch.
> 
> But I do feel like you figured out why the bug happened, now we're
> just discussing whether the patch is the right thing to do.

Yes.

> Maybe saying "doing the pmd copies for the initial stack isn't
> important, so let's just note this as a special case and get rid of
> the WARN_ON()" might be an alternative solution.

Personally, I feel it is better to keep the warning so that we can detect any
future bugs.

thanks,

 - Joel
