Message-ID: <af207cf8-3049-85eb-349d-5fed6b9be49c@oracle.com>
Date: Mon, 12 Oct 2020 11:03:33 -0600
From: Khalid Aziz <khalid.aziz@...cle.com>
To: Catalin Marinas <catalin.marinas@....com>
Cc: Jann Horn <jannh@...gle.com>,
"David S. Miller" <davem@...emloft.net>,
sparclinux@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Christoph Hellwig <hch@...radead.org>,
Anthony Yznaga <anthony.yznaga@...cle.com>,
Will Deacon <will@...nel.org>,
linux-arm-kernel@...ts.infradead.org,
Michael Ellerman <mpe@...erman.id.au>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH 1/2] mm/mprotect: Call arch_validate_prot under mmap_lock
and with length
On 10/10/20 5:09 AM, Catalin Marinas wrote:
> Hi Khalid,
>
> On Wed, Oct 07, 2020 at 02:14:09PM -0600, Khalid Aziz wrote:
>> On 10/7/20 1:39 AM, Jann Horn wrote:
>>> arch_validate_prot() is a hook that can validate whether a given set of
>>> protection flags is valid in an mprotect() operation. It is given the set
>>> of protection flags and the address being modified.
>>>
>>> However, the address being modified currently cannot actually be used in
>>> a meaningful way because:
>>>
>>> 1. Only the address is given, but not the length, and the operation can
>>> span multiple VMAs. Therefore, the callee can't actually tell which
>>> virtual address range, or which VMAs, are being targeted.
>>> 2. The mmap_lock is not held, meaning that if the callee were to check
>>> the VMA at @addr, that VMA would be unrelated to the one the
>>> operation is performed on.
>>>
>>> Currently, custom arch_validate_prot() handlers are defined by
>>> arm64, powerpc and sparc.
>>> arm64 and powerpc don't care about the address range; they just check the
>>> flags against CPU support masks.
>>> sparc's arch_validate_prot() attempts to look at the VMA, but doesn't take
>>> the mmap_lock.
>>>
>>> Change the function signature to also take a length, and move the
>>> arch_validate_prot() call in mm/mprotect.c down into the locked region.
> [...]
>> As Chris pointed out, the call to arch_validate_prot() from do_mmap2()
>> is made without holding mmap_lock. The lock is not acquired until
>> vm_mmap_pgoff(). This variance is uncomfortable, but I am more
>> uncomfortable forcing all implementations of arch_validate_prot() to
>> require mmap_lock to be held when non-sparc implementations have no
>> such need yet. Since do_mmap2() is in powerpc-specific code, for now
>> this patch solves the current problem.
>
> I still think sparc should avoid walking the vmas in
> arch_validate_prot(). The core code already has the vmas, though not
> when calling arch_validate_prot(). That's one of the reasons I added
> arch_validate_flags() with the MTE patches. For sparc, this could be
> (untested, just copied the arch_validate_prot() code):
I am a little uncomfortable with the idea of validating protection bits
inside the VMA walk loop in do_mprotect_pkey(). When ADI is being
enabled across multiple VMAs and arch_validate_flags() fails on a later
VMA, do_mprotect_pkey() will bail out with an error, leaving ADI enabled
on the earlier VMAs. This applies to protection bits other than ADI as
well, of course. It becomes a partial failure of the mprotect() call. I
think it should be all or nothing with mprotect() - when one calls
mprotect() from userspace, either the entire address range passed in
gets its protection bits updated or none of it does. That requires
validating protection bits upfront, or undoing whatever earlier
iterations of the VMA walk loop might have done.
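
Roughly, the upfront check could look something like this (an untested
sketch, meant as a fragment near the top of do_mprotect_pkey() reusing
its existing locals vma, end, prot, pkey and error; the flag masking
and per-VMA pkey handling are simplified compared to what the real
loop does):

	struct vm_area_struct *tmp;
	unsigned long newflags;

	/*
	 * Validate the new flags for every VMA in [start, end) before
	 * modifying any of them, so a failure cannot leave the first
	 * few VMAs changed and the rest untouched.
	 */
	for (tmp = vma; tmp && tmp->vm_start < end; tmp = tmp->vm_next) {
		newflags = calc_vm_prot_bits(prot, pkey) |
			   (tmp->vm_flags & ~(VM_READ | VM_WRITE | VM_EXEC));
		if (!arch_validate_flags(newflags)) {
			error = -EINVAL;
			goto out;	/* drops mmap_lock; nothing changed yet */
		}
	}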
--
Khalid
>
> static inline bool arch_validate_flags(unsigned long vm_flags)
> {
> 	if (!(vm_flags & VM_SPARC_ADI))
> 		return true;
>
> 	if (!adi_capable())
> 		return false;
>
> 	/* ADI cannot be enabled on PFN-mapped pages */
> 	if (vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
> 		return false;
>
> 	/*
> 	 * Mergeable pages can become unmergeable if ADI is enabled on
> 	 * them even if they have identical data on them. This can be
> 	 * because ADI enabled pages with identical data may still not
> 	 * have identical ADI tags on them. Disallow ADI on mergeable
> 	 * pages.
> 	 */
> 	if (vm_flags & VM_MERGEABLE)
> 		return false;
>
> 	return true;
> }
>
>> That leaves open the question of whether generic mmap should call
>> arch_validate_prot() and return EINVAL for an invalid combination of
>> protection bits, but that is better addressed in a separate patch.
>
> The above would cover mmap() as well.
>
> The current sparc_validate_prot() relies on finding the vma for the
> corresponding address. However, if you call this early in the mmap()
> path, there's no such vma. It is only created later in mmap_region(),
> which no longer has the original PROT_* flags (they have all been
> converted to VM_* flags).
>
> Calling arch_validate_flags() on mmap() has a small side-effect on the
> user ABI: if the CPU is not adi_capable(), PROT_ADI is currently ignored
> on mmap() but rejected by sparc_validate_prot(). Powerpc already does
> this and I think it should be fine for arm64 (it needs checking, though,
> as we have another flag, PROT_BTI; hopefully dynamic loaders don't pass
> this flag unconditionally).
>
> However, as I said above, it doesn't solve the mmap() PROT_ADI checking
> for sparc since there's no vma yet. I'd strongly recommend the
> arch_validate_flags() approach and reverting the "start" parameter added
> to arch_validate_prot() if you go for the flags route.
>
> Thanks.
>