Message-ID: <alpine.DEB.2.21.1904191248090.3174@nanos.tec.linutronix.de>
Date: Fri, 19 Apr 2019 12:55:36 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Dave Hansen <dave.hansen@...ux.intel.com>
cc: LKML <linux-kernel@...r.kernel.org>, rguenther@...e.de,
mhocko@...e.com, vbabka@...e.cz, luto@...capital.net,
x86@...nel.org, Andrew Morton <akpm@...ux-foundation.org>,
linux-mm@...ck.org, stable@...r.kernel.org,
Michael Ellerman <mpe@...erman.id.au>
Subject: Re: [PATCH] x86/mpx: fix recursive munmap() corruption
On Mon, 1 Apr 2019, Dave Hansen wrote:
> diff -puN mm/mmap.c~mpx-rss-pass-no-vma mm/mmap.c
> --- a/mm/mmap.c~mpx-rss-pass-no-vma 2019-04-01 06:56:53.409411123 -0700
> +++ b/mm/mmap.c 2019-04-01 06:56:53.423411123 -0700
> @@ -2731,9 +2731,17 @@ int __do_munmap(struct mm_struct *mm, un
> return -EINVAL;
>
> len = PAGE_ALIGN(len);
> + end = start + len;
> if (len == 0)
> return -EINVAL;
>
> + /*
> + * arch_unmap() might do unmaps itself. It must be called
> + * and finish any rbtree manipulation before this code
> + * runs and also starts to manipulate the rbtree.
> + */
> + arch_unmap(mm, start, end);
...
> -static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
> - unsigned long start, unsigned long end)
> +static inline void arch_unmap(struct mm_struct *mm, unsigned long start,
> + unsigned long end)
While you fixed up the asm-generic thing, this breaks arch/um and
arch/unicore32. For those the fixup is trivial: just remove the vma
argument.
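A minimal sketch of that fixup (assuming the um and unicore32 variants
are empty stubs like the asm-generic one; if they do real work, the
body stays and only the signature changes):

static inline void arch_unmap(struct mm_struct *mm,
			      unsigned long start, unsigned long end)
{
	/* empty stub; only the now-unused vma argument is dropped to
	 * match the new __do_munmap() call site */
}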
But it also breaks powerpc, and there I'm not sure whether moving
arch_unmap() to the beginning of __do_munmap() is safe. Michael???
Aside from that, the powerpc variant looks suspicious:
static inline void arch_unmap(struct mm_struct *mm,
unsigned long start, unsigned long end)
{
if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
mm->context.vdso_base = 0;
}
Shouldn't that be:
if (start >= mm->context.vdso_base && mm->context.vdso_base < end)
Hmm?
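For illustration, a tiny standalone comparison of the two conditions;
the vdso_base, start and end values below are made up:

#include <stdio.h>

int main(void)
{
	unsigned long vdso_base = 0x2000;		/* hypothetical */
	unsigned long start = 0x1000, end = 0x3000;	/* unmap range */

	/* existing check: fires when vdso_base lies in [start, end) */
	printf("%d\n", start <= vdso_base && vdso_base < end);	/* 1 */

	/* suggested check: fires only when the unmap range starts at
	 * or above vdso_base (and vdso_base is below end) */
	printf("%d\n", start >= vdso_base && vdso_base < end);	/* 0 */

	return 0;
}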
Thanks,
tglx