Message-Id: <6718ede2-1fcb-1a8f-a116-250eef6416c7@linux.vnet.ibm.com>
Date:   Tue, 23 Apr 2019 13:16:42 +0200
From:   Laurent Dufour <ldufour@...ux.vnet.ibm.com>
To:     Michael Ellerman <mpe@...erman.id.au>,
        Thomas Gleixner <tglx@...utronix.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>
Cc:     LKML <linux-kernel@...r.kernel.org>, rguenther@...e.de,
        mhocko@...e.com, vbabka@...e.cz, luto@...capital.net,
        x86@...nel.org, Andrew Morton <akpm@...ux-foundation.org>,
        linux-mm@...ck.org, stable@...r.kernel.org
Subject: Re: [PATCH] x86/mpx: fix recursive munmap() corruption

On 20/04/2019 at 12:31, Michael Ellerman wrote:
> Thomas Gleixner <tglx@...utronix.de> writes:
>> On Mon, 1 Apr 2019, Dave Hansen wrote:
>>> diff -puN mm/mmap.c~mpx-rss-pass-no-vma mm/mmap.c
>>> --- a/mm/mmap.c~mpx-rss-pass-no-vma	2019-04-01 06:56:53.409411123 -0700
>>> +++ b/mm/mmap.c	2019-04-01 06:56:53.423411123 -0700
>>> @@ -2731,9 +2731,17 @@ int __do_munmap(struct mm_struct *mm, un
>>>   		return -EINVAL;
>>>   
>>>   	len = PAGE_ALIGN(len);
>>> +	end = start + len;
>>>   	if (len == 0)
>>>   		return -EINVAL;
>>>   
>>> +	/*
>>> +	 * arch_unmap() might do unmaps itself.  It must be called
>>> +	 * and finish any rbtree manipulation before this code
>>> +	 * runs and also starts to manipulate the rbtree.
>>> +	 */
>>> +	arch_unmap(mm, start, end);
>>
>> ...
>>    
>>> -static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
>>> -			      unsigned long start, unsigned long end)
>>> +static inline void arch_unmap(struct mm_struct *mm, unsigned long start,
>>> +			      unsigned long end)
>>
>> While you fixed up the asm-generic thing, this breaks arch/um and
>> arch/unicore32. For those the fixup is trivial by removing the vma
>> argument.
>>
>> But it also breaks powerpc and there I'm not sure whether moving
>> arch_unmap() to the beginning of __do_munmap() is safe. Michael???
> 
> I don't know for sure but I think it should be fine. That code is just
> there to handle CRIU unmapping/remapping the VDSO. So that either needs
> to happen while the process is stopped or it needs to handle races
> anyway, so I don't see how the placement within the unmap path should
> matter.

My only concern is the error path.
Calling arch_unmap() before handling any error case means that it would 
have to be undone, and there is no way to do so.

I don't know the rationale for moving arch_unmap() to the beginning 
of __do_munmap(), but the error paths must be handled.
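
To illustrate, here is a toy user-space model I put together (not the 
kernel code; arch_unmap_model(), do_munmap_model() and the addresses are 
made up for the example): once the arch hook has cleared vdso_base, a 
later error return leaves the process thinking it has no VDSO even 
though nothing was actually unmapped.

/* Toy model of the ordering concern; builds with any C compiler. */
#include <stdio.h>

struct mm_ctx { unsigned long vdso_base; };

/* Stand-in for the powerpc arch_unmap(): forget the VDSO if covered. */
static void arch_unmap_model(struct mm_ctx *mm, unsigned long start,
			     unsigned long end)
{
	if (start <= mm->vdso_base && mm->vdso_base < end)
		mm->vdso_base = 0;
}

/* Stand-in for __do_munmap() with the patch applied: arch_unmap() runs
 * first, then a later check may still fail and return an error. */
static int do_munmap_model(struct mm_ctx *mm, unsigned long start,
			   unsigned long end, int later_check_fails)
{
	arch_unmap_model(mm, start, end);

	if (later_check_fails)
		return -1;	/* error path: the arch state is not restored */

	/* ... the real unmapping work would happen here ... */
	return 0;
}

int main(void)
{
	struct mm_ctx mm = { .vdso_base = 0x7000 };
	int ret = do_munmap_model(&mm, 0x6000, 0x9000, 1);

	/* Prints ret=-1 vdso_base=0: the call failed but the VDSO is "lost". */
	printf("ret=%d vdso_base=%#lx\n", ret, mm.vdso_base);
	return 0;
}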

>> Aside of that the powerpc variant looks suspicious:
>>
>> static inline void arch_unmap(struct mm_struct *mm,
>>                                unsigned long start, unsigned long end)
>> {
>>   	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
>>                  mm->context.vdso_base = 0;
>> }
>>
>> Shouldn't that be:
>>
>>   	if (start >= mm->context.vdso_base && mm->context.vdso_base < end)
>>
>> Hmm?
> 
> Yeah looks pretty suspicious. I'll follow-up with Laurent who wrote it.
> Thanks for spotting it!

I have to admit that I had to read that code carefully before answering.

There are two assumptions here:
  1. 'start' and 'end' are page aligned (this is guaranteed by 
__do_munmap()).
  2. the VDSO is 1 page (this is guaranteed by the union vdso_data_store 
on powerpc).

The idea is to handle a munmap() call surrounding the VDSO area:
       | VDSO |
  ^start         ^end

This case is covered by the test, and a munmap() matching the exact 
boundaries of the VDSO is handled too.
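
To make that concrete, here is a small stand-alone user-space sketch of 
the same condition (my own illustration; vdso_covered() and the addresses 
are made up, and a 4k page is assumed): the check fires for a munmap() 
surrounding the single VDSO page and for one matching its exact 
boundaries, but not for a page-aligned range ending at the VDSO.

#include <stdio.h>
#include <stdbool.h>

/* Same condition as the powerpc arch_unmap(): with start/end page aligned
 * and a single-page VDSO starting at vdso_base, the VDSO page is covered
 * whenever start <= vdso_base && vdso_base < end. */
static bool vdso_covered(unsigned long start, unsigned long end,
			 unsigned long vdso_base)
{
	return start <= vdso_base && vdso_base < end;
}

int main(void)
{
	unsigned long vdso_base = 0x7000;	/* illustrative address */

	printf("surrounding [0x6000,0x9000): %d\n",
	       vdso_covered(0x6000, 0x9000, vdso_base));	/* 1 */
	printf("exact match [0x7000,0x8000): %d\n",
	       vdso_covered(0x7000, 0x8000, vdso_base));	/* 1 */
	printf("below       [0x5000,0x7000): %d\n",
	       vdso_covered(0x5000, 0x7000, vdso_base));	/* 0 */
	return 0;
}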

Am I missing something?

Cheers,
Laurent.
