Message-ID: <alpine.DEB.2.21.1904231533190.9956@nanos.tec.linutronix.de>
Date: Tue, 23 Apr 2019 15:34:14 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Laurent Dufour <ldufour@...ux.vnet.ibm.com>
cc: Michael Ellerman <mpe@...erman.id.au>,
Dave Hansen <dave.hansen@...ux.intel.com>,
LKML <linux-kernel@...r.kernel.org>, rguenther@...e.de,
mhocko@...e.com, vbabka@...e.cz, luto@...capital.net,
x86@...nel.org, Andrew Morton <akpm@...ux-foundation.org>,
linux-mm@...ck.org, stable@...r.kernel.org
Subject: Re: [PATCH] x86/mpx: fix recursive munmap() corruption
On Tue, 23 Apr 2019, Laurent Dufour wrote:
> On 20/04/2019 at 12:31, Michael Ellerman wrote:
> > Thomas Gleixner <tglx@...utronix.de> writes:
> > > Aside of that the powerpc variant looks suspicious:
> > >
> > > static inline void arch_unmap(struct mm_struct *mm,
> > >                               unsigned long start, unsigned long end)
> > > {
> > > 	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
> > > 		mm->context.vdso_base = 0;
> > > }
> > >
> > > Shouldn't that be:
> > >
> > > 	if (start >= mm->context.vdso_base && mm->context.vdso_base < end)
> > >
> > > Hmm?
> >
> > Yeah, looks pretty suspicious. I'll follow up with Laurent, who wrote it.
> > Thanks for spotting it!
>
> I have to admit that I had to read that code carefully before answering.
>
> There are 2 assumptions here:
> 1. 'start' and 'end' are page aligned (this is guaranteed by __do_munmap()).
> 2. the VDSO is 1 page (this is guaranteed by the union vdso_data_store on
> powerpc).
>
> The idea is to handle a munmap() call surrounding the VDSO area:
>        | VDSO |
>   ^start        ^end
>
> This case is covered by the test, and a munmap() matching the exact boundaries
> of the VDSO is handled too.
>
> Am I missing something?
Well, if this is the intention, then you forgot to add a comment explaining it :)
Thanks,
tglx
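
For clarity, here is a minimal user-space sketch of the check under the assumptions
stated above (page-aligned start/end, a single-page VDSO). The VDSO_BASE value and
the harness itself are hypothetical illustrations, not kernel code:

/*
 * Compile with: cc -o vdso_check vdso_check.c
 * Shows that the original "start <=" condition catches both the surrounding
 * and the exact-boundary munmap(), while the suggested "start >=" variant
 * would miss the surrounding case.
 */
#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE	0x1000UL
#define VDSO_BASE	0x7fff9000UL	/* hypothetical, page aligned */

/* The check as written in the powerpc arch_unmap() */
static bool unmaps_vdso(unsigned long start, unsigned long end)
{
	return start <= VDSO_BASE && VDSO_BASE < end;
}

/* The suggested "start >=" variant */
static bool unmaps_vdso_ge(unsigned long start, unsigned long end)
{
	return start >= VDSO_BASE && VDSO_BASE < end;
}

int main(void)
{
	/* munmap() range surrounding the VDSO page */
	unsigned long s1 = VDSO_BASE - PAGE_SIZE, e1 = VDSO_BASE + 2 * PAGE_SIZE;
	/* munmap() range matching the VDSO page exactly */
	unsigned long s2 = VDSO_BASE, e2 = VDSO_BASE + PAGE_SIZE;

	printf("surrounding: <= gives %d, >= gives %d\n",
	       unmaps_vdso(s1, e1), unmaps_vdso_ge(s1, e1));	/* 1, 0 */
	printf("exact match: <= gives %d, >= gives %d\n",
	       unmaps_vdso(s2, e2), unmaps_vdso_ge(s2, e2));	/* 1, 1 */
	return 0;
}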