Message-Id: <20210215152722.229588462@linuxfoundation.org>
Date:   Mon, 15 Feb 2021 16:27:39 +0100
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     linux-kernel@...r.kernel.org
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        stable@...r.kernel.org, Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Luis Machado <luis.machado@...aro.org>,
        Vincenzo Frascino <vincenzo.frascino@....com>
Subject: [PATCH 5.10 086/104] arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page

From: Catalin Marinas <catalin.marinas@....com>

commit 68d54ceeec0e5fee4fb8048e6a04c193f32525ca upstream.

The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user
page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged
page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns
-EIO.

A newly created PROT_MTE mapping points to the zero page, which had its
tags zeroed during cpu_enable_mte(). If there were no prior writes to
this mapping, ptrace(PTRACE_PEEKMTETAGS) fails with -EIO since the zero
page does not have the PG_mte_tagged flag set.
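
For reference (not part of the original commit message), a minimal
userspace reproducer along these lines can exercise the scenario. It is a
sketch only: it assumes an MTE-capable CPU and kernel, hardcodes a 4K
buffer size, and defines PROT_MTE/PTRACE_PEEKMTETAGS as fallbacks in case
the installed headers predate MTE support.

	#include <errno.h>
	#include <signal.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>
	#include <sys/uio.h>
	#include <sys/wait.h>
	#include <unistd.h>

	#ifndef PROT_MTE
	#define PROT_MTE		0x20	/* arm64 mmap protection bit */
	#endif
	#ifndef PTRACE_PEEKMTETAGS
	#define PTRACE_PEEKMTETAGS	33	/* arm64 MTE tag read request */
	#endif

	int main(void)
	{
		/* Anonymous PROT_MTE mapping; a read-only fault maps the zero page. */
		unsigned char *p = mmap(NULL, 4096, PROT_READ | PROT_MTE,
					MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap(PROT_MTE)");
			return 1;
		}

		pid_t pid = fork();
		if (pid == 0) {
			ptrace(PTRACE_TRACEME, 0, NULL, NULL);
			/* Read without writing so the PTE points at the zero page. */
			volatile unsigned char c = p[0];
			(void)c;
			raise(SIGSTOP);		/* stop so the parent can peek the tags */
			_exit(0);
		}

		waitpid(pid, NULL, 0);		/* wait for the child to stop */

		unsigned char tags[4096 / 16];	/* one tag per 16-byte granule */
		struct iovec iov = { .iov_base = tags, .iov_len = sizeof(tags) };

		if (ptrace(PTRACE_PEEKMTETAGS, pid, p, &iov) == -1)
			printf("PTRACE_PEEKMTETAGS failed: %s\n", strerror(errno));
		else
			printf("read %zu tags, tag[0] = %u\n", iov.iov_len, tags[0]);

		kill(pid, SIGKILL);
		waitpid(pid, NULL, 0);
		return 0;
	}

Without the change below, the PTRACE_PEEKMTETAGS call fails (with -EIO,
as described above); with it applied, the call succeeds and returns
all-zero tags for the granules backed by the zero page.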

Set PG_mte_tagged on the zero page when its tags are cleared during
boot. In addition, to avoid ptrace(PTRACE_PEEKMTETAGS) succeeding on
!PROT_MTE mappings pointing to the zero page, change the
__access_remote_tags() check to (vm_flags & VM_MTE) instead of
PG_mte_tagged.

Signed-off-by: Catalin Marinas <catalin.marinas@....com>
Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
Cc: <stable@...r.kernel.org> # 5.10.x
Cc: Will Deacon <will@...nel.org>
Reported-by: Luis Machado <luis.machado@...aro.org>
Tested-by: Luis Machado <luis.machado@...aro.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@....com>
Link: https://lore.kernel.org/r/20210210180316.23654-1-catalin.marinas@arm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 arch/arm64/kernel/cpufeature.c |    6 +-----
 arch/arm64/kernel/mte.c        |    3 ++-
 2 files changed, 3 insertions(+), 6 deletions(-)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1696,16 +1696,12 @@ static void bti_enable(const struct arm6
 #ifdef CONFIG_ARM64_MTE
 static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 {
-	static bool cleared_zero_page = false;
-
 	/*
 	 * Clear the tags in the zero page. This needs to be done via the
 	 * linear map which has the Tagged attribute.
 	 */
-	if (!cleared_zero_page) {
-		cleared_zero_page = true;
+	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
 		mte_clear_page_tags(lm_alias(empty_zero_page));
-	}
 }
 #endif /* CONFIG_ARM64_MTE */
 
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -239,11 +239,12 @@ static int __access_remote_tags(struct m
 		 * would cause the existing tags to be cleared if the page
 		 * was never mapped with PROT_MTE.
 		 */
-		if (!test_bit(PG_mte_tagged, &page->flags)) {
+		if (!(vma->vm_flags & VM_MTE)) {
 			ret = -EOPNOTSUPP;
 			put_page(page);
 			break;
 		}
+		WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
 
 		/* limit access to the end of the page */
 		offset = offset_in_page(addr);

