Message-Id: <20200116231710.926122766@linuxfoundation.org>
Date: Fri, 17 Jan 2020 00:18:08 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Kees Cook <keescook@...omium.org>,
Peter Robinson <pbrobinson@...il.com>,
Laura Abbott <labbott@...hat.com>,
Will Deacon <will.deacon@....com>,
Ben Hutchings <ben.hutchings@...ethink.co.uk>
Subject: [PATCH 4.14 10/71] arm64: Make sure permission updates happen for pmd/pud
From: Laura Abbott <labbott@...hat.com>
commit 82034c23fcbc2389c73d97737f61fa2dd6526413 upstream.
Commit 15122ee2c515 ("arm64: Enforce BBM for huge IO/VMAP mappings")
disallowed block mappings for ioremap since that code does not honor
break-before-make. The same APIs are also used for permission updating
though and the extra checks prevent the permission updates from happening,
even though this should be permitted. This results in read-only permissions
not being fully applied. Visibly, this can occasionally be seen as a failure
on the built in rodata test when the test data ends up in a section or
as an odd RW gap on the page table dump. Fix this by using
pgattr_change_is_safe instead of p*d_present for determining if the
change is permitted.
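
[Editor's note, not part of the patch: for context, pgattr_change_is_safe() permits an update only when the old and new entries differ solely in attributes that can be changed on a live mapping without break-before-make. The sketch below shows roughly how the helper looks in arch/arm64/mm/mmu.c of this era; the exact attribute bits (PTE_PXN, PTE_UXN, PTE_RDONLY, PTE_CONT) may differ between kernel versions.]

/*
 * Illustrative sketch only: approximate shape of the helper this patch
 * switches to, as found in arch/arm64/mm/mmu.c around this kernel version.
 */
static bool pgattr_change_is_safe(u64 old, u64 new)
{
	/*
	 * These mapping attributes may be updated in live kernel
	 * mappings without break-before-make: execute-never and
	 * read-only bits.
	 */
	static const pteval_t mask = PTE_PXN | PTE_UXN | PTE_RDONLY;

	/* Creating or tearing down a mapping is always safe. */
	if (old == 0 || new == 0)
		return true;

	/* Live contiguous mappings may not be manipulated at all. */
	if ((old | new) & PTE_CONT)
		return false;

	/* Any other attribute change requires break-before-make. */
	return ((old ^ new) & ~mask) == 0;
}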
Reviewed-by: Kees Cook <keescook@...omium.org>
Tested-by: Peter Robinson <pbrobinson@...il.com>
Reported-by: Peter Robinson <pbrobinson@...il.com>
Fixes: 15122ee2c515 ("arm64: Enforce BBM for huge IO/VMAP mappings")
Signed-off-by: Laura Abbott <labbott@...hat.com>
Signed-off-by: Will Deacon <will.deacon@....com>
Signed-off-by: Ben Hutchings <ben.hutchings@...ethink.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/arm64/mm/mmu.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -917,13 +917,15 @@ int pud_set_huge(pud_t *pudp, phys_addr_
 {
 	pgprot_t sect_prot = __pgprot(PUD_TYPE_SECT |
 					pgprot_val(mk_sect_prot(prot)));
+	pud_t new_pud = pfn_pud(__phys_to_pfn(phys), sect_prot);
 
-	/* ioremap_page_range doesn't honour BBM */
-	if (pud_present(READ_ONCE(*pudp)))
+	/* Only allow permission changes for now */
+	if (!pgattr_change_is_safe(READ_ONCE(pud_val(*pudp)),
+				   pud_val(new_pud)))
 		return 0;
 
 	BUG_ON(phys & ~PUD_MASK);
-	set_pud(pudp, pfn_pud(__phys_to_pfn(phys), sect_prot));
+	set_pud(pudp, new_pud);
 	return 1;
 }
 
@@ -931,13 +933,15 @@ int pmd_set_huge(pmd_t *pmdp, phys_addr_
 {
 	pgprot_t sect_prot = __pgprot(PMD_TYPE_SECT |
 					pgprot_val(mk_sect_prot(prot)));
+	pmd_t new_pmd = pfn_pmd(__phys_to_pfn(phys), sect_prot);
 
-	/* ioremap_page_range doesn't honour BBM */
-	if (pmd_present(READ_ONCE(*pmdp)))
+	/* Only allow permission changes for now */
+	if (!pgattr_change_is_safe(READ_ONCE(pmd_val(*pmdp)),
+				   pmd_val(new_pmd)))
 		return 0;
 
 	BUG_ON(phys & ~PMD_MASK);
-	set_pmd(pmdp, pfn_pmd(__phys_to_pfn(phys), sect_prot));
+	set_pmd(pmdp, new_pmd);
 	return 1;
 }
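
[Editor's note, not part of the patch: for readers without the tree handy, this is approximately how pud_set_huge() reads after the first hunk is applied. The function signature is reconstructed from the truncated hunk header; parameter names are assumed from the arm64 tree of this era.]

int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
{
	pgprot_t sect_prot = __pgprot(PUD_TYPE_SECT |
					pgprot_val(mk_sect_prot(prot)));
	pud_t new_pud = pfn_pud(__phys_to_pfn(phys), sect_prot);

	/* Only allow permission changes for now */
	if (!pgattr_change_is_safe(READ_ONCE(pud_val(*pudp)),
				   pud_val(new_pud)))
		return 0;

	BUG_ON(phys & ~PUD_MASK);
	set_pud(pudp, new_pud);
	return 1;
}

The pmd_set_huge() hunk is the same change at the PMD level.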