Message-ID: <YvSRyjDsrbB7v2JT@ip-172-31-24-42.ap-northeast-1.compute.internal>
Date:   Thu, 11 Aug 2022 05:21:14 +0000
From:   Hyeonggon Yoo <42.hyeyoo@...il.com>
To:     Aaron Lu <aaron.lu@...el.com>
Cc:     Dave Hansen <dave.hansen@...el.com>,
        Rick Edgecombe <rick.p.edgecombe@...el.com>,
        Song Liu <song@...nel.org>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [RFC PATCH 1/4] x86/mm/cpa: restore global bit when page is
 present

On Mon, Aug 08, 2022 at 10:56:46PM +0800, Aaron Lu wrote:
> For configs that don't have PTI enabled or CPUs that don't need
> meltdown mitigation, the current kernel can lose the GLOBAL bit after a
> page goes through a cycle of present -> not present -> present.
> 
> It happened like this (__vunmap() does this in vm_remove_mappings()):
> original page protection: 0x8000000000000163 (NX/G/D/A/RW/P)
> set_memory_np(page, 1):   0x8000000000000062 (NX/D/A/RW) lost G and P
> set_memory_p(page, 1):    0x8000000000000063 (NX/D/A/RW/P) restored P
> 
> In the end, this page's protection no longer has the Global bit set, and
> this creates a problem for this merge-small-mappings feature.
> 
> For this reason, restore the Global bit for systems that do not have PTI
> enabled if the page is present.
> 
> (pgprot_clear_protnone_bits() deserves a better name if this patch is
> acceptable, but first I would like some feedback on whether this is the
> right way to solve this, so I haven't bothered with the name yet.)
> 
> Signed-off-by: Aaron Lu <aaron.lu@...el.com>
> ---
>  arch/x86/mm/pat/set_memory.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 1abd5438f126..33657a54670a 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -758,6 +758,8 @@ static pgprot_t pgprot_clear_protnone_bits(pgprot_t prot)
>  	 */
>  	if (!(pgprot_val(prot) & _PAGE_PRESENT))
>  		pgprot_val(prot) &= ~_PAGE_GLOBAL;
> +	else
> +		pgprot_val(prot) |= _PAGE_GLOBAL & __default_kernel_pte_mask;
>  
>  	return prot;
>  }

IIUC this makes it impossible to set _PAGE_GLOBAL when PTI is on.
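
My understanding (simplified, and quoting arch/x86/mm/init.c from memory,
so the exact location may differ): with PTI enabled, _PAGE_GLOBAL is
stripped from the default mask during boot, roughly:

	__default_kernel_pte_mask = __supported_pte_mask;
	/* With PTI the kernel is mostly non-Global: */
	if (cpu_feature_enabled(X86_FEATURE_PTI))
		__default_kernel_pte_mask &= ~_PAGE_GLOBAL;

so "_PAGE_GLOBAL & __default_kernel_pte_mask" evaluates to 0 and the new
else branch does nothing there.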

Maybe it would be less intrusive to make
set_direct_map_default_noflush() replace the protection bits with
PAGE_KERNEL, as it is only called for the direct map and the function's
purpose is to reset permissions to the default:

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 1abd5438f126..0dd4433c1382 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2250,7 +2250,16 @@ int set_direct_map_invalid_noflush(struct page *page)

 int set_direct_map_default_noflush(struct page *page)
 {
-       return __set_pages_p(page, 1);
+       unsigned long tempaddr = (unsigned long) page_address(page);
+       struct cpa_data cpa = {
+                       .vaddr = &tempaddr,
+                       .pgd = NULL,
+                       .numpages = 1,
+                       .mask_set = PAGE_KERNEL,
+                       .mask_clr = __pgprot(~0),
+                       .flags = 0};
+
+       return __change_page_attr_set_clr(&cpa, 0);
 }
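
For reference (quoted from memory, so the details may differ in your
tree), __set_pages_p() only ORs in P and RW and clears nothing, which is
why G never comes back once it has been dropped:

	static int __set_pages_p(struct page *page, int numpages)
	{
		unsigned long tempaddr = (unsigned long) page_address(page);
		struct cpa_data cpa = { .vaddr = &tempaddr,
					.pgd = NULL,
					.numpages = numpages,
					.mask_set = __pgprot(_PAGE_PRESENT | _PAGE_RW),
					.mask_clr = __pgprot(0),
					.flags = 0};

		return __change_page_attr_set_clr(&cpa, 0);
	}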

set_direct_map_{invalid,default}_noflush() is the exact reason why the
direct map becomes split after vmalloc/vfree with special permissions;
a rough sketch of that flow is below.
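
	/*
	 * My understanding of the flow, simplified, for a
	 * VM_FLUSH_RESET_PERMS vmalloc area:
	 *
	 * vfree()
	 *   __vunmap()
	 *     vm_remove_mappings()
	 *       set_direct_map_invalid_noflush()  // clears P (and G), splits the large direct mapping
	 *       ... TLB/alias flush ...
	 *       set_direct_map_default_noflush()  // restores P, but currently not G
	 */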

> -- 
> 2.37.1
> 
> 
