Message-ID: <287fb059-910e-4dc9-b21d-581c97a06323@oracle.com>
Date:   Thu, 21 Sep 2017 10:14:44 -0400
From:   Boris Ostrovsky <boris.ostrovsky@...cle.com>
To:     Juergen Gross <jgross@...e.com>, linux-kernel@...r.kernel.org,
        xen-devel@...ts.xenproject.org
Cc:     kirill.shutemov@...ux.intel.com
Subject: Re: [PATCH] xen: support 52 bit physical addresses in pv guests



On 09/21/2017 04:01 AM, Juergen Gross wrote:
> Physical addresses on processors supporting 5 level paging can be up to
> 52 bits wide. For a Xen pv guest running on such a machine those
> physical addresses have to be supported in order to be able to use any
> memory on the machine even if the guest itself does not support 5 level
> paging.
> 
> So when reading/writing a MFN from/to a pte don't use the kernel's
> PTE_PFN_MASK but a new XEN_PTE_MFN_MASK allowing full 40 bit wide MFNs.

full 52 bits?
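
(As a rough illustration only, assuming 4 KiB pages, i.e. PAGE_SHIFT == 12:
a 52-bit machine address leaves 52 - 12 = 40 bits for the frame number
itself, which may be what the "40 bit wide MFNs" wording refers to.
Standalone sketch, not kernel code:)

    /*
     * Illustrative only: relation between a 52-bit physical address and
     * the width of the frame number it contains, assuming 4 KiB pages.
     */
    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT		12
    #define XEN_PHYSICAL_MASK	((1ULL << 52) - 1)

    int main(void)
    {
    	uint64_t max_maddr = XEN_PHYSICAL_MASK;	/* bits 0..51 set */
    	uint64_t max_mfn   = max_maddr >> PAGE_SHIFT;	/* bits 0..39 set */

    	printf("physical address bits: 52\n");
    	printf("max MFN: 0x%llx (%d bits)\n",
    	       (unsigned long long)max_mfn, 64 - __builtin_clzll(max_mfn));
    	return 0;
    }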

> 
> Signed-off-by: Juergen Gross <jgross@...e.com>
> ---
>   arch/x86/include/asm/xen/page.h | 11 ++++++++++-
>   arch/x86/xen/mmu_pv.c           |  4 ++--
>   2 files changed, 12 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index 07b6531813c4..bcb8b193c8d1 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -26,6 +26,15 @@ typedef struct xpaddr {
>   	phys_addr_t paddr;
>   } xpaddr_t;
>   
> +#ifdef CONFIG_X86_64
> +#define XEN_PHYSICAL_MASK	((1UL << 52) - 1)


SME is not supported for PV guests, but for consistency (and in case the SME
bit somehow gets set) this could be:

#define XEN_PHYSICAL_MASK	__sme_clr(((1UL << 52) - 1))
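
As a minimal sketch of what that would expand to, assuming the __sme_clr()
definition from include/linux/mem_encrypt.h (sme_me_mask is 0 when SME is
inactive, so the mask is unchanged in the common PV case; the bit-47 value
below is only a placeholder for the C-bit):

    /* Sketch only: effect of wrapping the mask in __sme_clr(). */
    #include <stdio.h>
    #include <stdint.h>

    static uint64_t sme_me_mask;		/* 0 unless SME is active */

    #define __sme_clr(x)	((x) & ~sme_me_mask)
    #define XEN_PHYSICAL_MASK	__sme_clr((1ULL << 52) - 1)

    int main(void)
    {
    	printf("SME off: 0x%llx\n", (unsigned long long)XEN_PHYSICAL_MASK);

    	sme_me_mask = 1ULL << 47;	/* pretend the C-bit is bit 47 */
    	printf("SME on:  0x%llx\n", (unsigned long long)XEN_PHYSICAL_MASK);
    	return 0;
    }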

But the real question I have is whether this patch is sufficient.
We are trying to preserve more bits in the mfn, but this mfn is then used,
say, in pte_pfn_to_mfn() to build a pte. Can we be sure that the pte
won't be stripped of the higher bits in native code (again, as an example,
native_make_pte()) because we are not compiled with 5LEVEL?
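
As a standalone sketch of that concern, assuming __PHYSICAL_MASK_SHIFT is 46
on a kernel built without CONFIG_X86_5LEVEL (its value at the time of this
patch); the names mirror the kernel's PHYSICAL_PAGE_MASK / PTE_PFN_MASK
definitions but are redefined here purely for illustration:

    /*
     * Sketch only: why a PTE_PFN_MASK sized for 46-bit physical addresses
     * strips high MFN bits, while XEN_PTE_MFN_MASK keeps them.
     */
    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT		12
    #define PAGE_MASK		(~((1L << PAGE_SHIFT) - 1))

    #define __PHYSICAL_MASK_SHIFT	46	/* assumed !CONFIG_X86_5LEVEL value */
    #define __PHYSICAL_MASK	((1ULL << __PHYSICAL_MASK_SHIFT) - 1)
    #define PTE_PFN_MASK	((uint64_t)((int64_t)PAGE_MASK & __PHYSICAL_MASK))

    #define XEN_PHYSICAL_MASK	((1ULL << 52) - 1)
    #define XEN_PTE_MFN_MASK	((uint64_t)((int64_t)PAGE_MASK & XEN_PHYSICAL_MASK))

    int main(void)
    {
    	/* An MFN needing more than 46 - 12 = 34 bits, e.g. bit 38 set. */
    	uint64_t mfn = 1ULL << 38;
    	uint64_t pte = mfn << PAGE_SHIFT;	/* pte value, flags omitted */

    	printf("mfn via PTE_PFN_MASK:     0x%llx\n",
    	       (unsigned long long)((pte & PTE_PFN_MASK) >> PAGE_SHIFT));
    	printf("mfn via XEN_PTE_MFN_MASK: 0x%llx\n",
    	       (unsigned long long)((pte & XEN_PTE_MFN_MASK) >> PAGE_SHIFT));
    	return 0;
    }

With these assumptions the first line prints 0x0, i.e. the high MFN bits are
lost once a 46-bit mask is applied, which is exactly the behaviour the
question is about.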

-boris



> +#else
> +#define XEN_PHYSICAL_MASK	__PHYSICAL_MASK
> +#endif
> +
> +#define XEN_PTE_MFN_MASK	((pteval_t)(((signed long)PAGE_MASK) & \
> +					    XEN_PHYSICAL_MASK))
> +
>   #define XMADDR(x)	((xmaddr_t) { .maddr = (x) })
>   #define XPADDR(x)	((xpaddr_t) { .paddr = (x) })
>   
> @@ -277,7 +286,7 @@ static inline unsigned long bfn_to_local_pfn(unsigned long mfn)
>   
>   static inline unsigned long pte_mfn(pte_t pte)
>   {
> -	return (pte.pte & PTE_PFN_MASK) >> PAGE_SHIFT;
> +	return (pte.pte & XEN_PTE_MFN_MASK) >> PAGE_SHIFT;
>   }
>   
>   static inline pte_t mfn_pte(unsigned long page_nr, pgprot_t pgprot)
> diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
> index 509f560bd0c6..958d36d776d9 100644
> --- a/arch/x86/xen/mmu_pv.c
> +++ b/arch/x86/xen/mmu_pv.c
> @@ -315,7 +315,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
>   static pteval_t pte_mfn_to_pfn(pteval_t val)
>   {
>   	if (val & _PAGE_PRESENT) {
> -		unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
> +		unsigned long mfn = (val & XEN_PTE_MFN_MASK) >> PAGE_SHIFT;
>   		unsigned long pfn = mfn_to_pfn(mfn);
>   
>   		pteval_t flags = val & PTE_FLAGS_MASK;
> @@ -1740,7 +1740,7 @@ static unsigned long __init m2p(phys_addr_t maddr)
>   {
>   	phys_addr_t paddr;
>   
> -	maddr &= PTE_PFN_MASK;
> +	maddr &= XEN_PTE_MFN_MASK;
>   	paddr = mfn_to_pfn(maddr >> PAGE_SHIFT) << PAGE_SHIFT;
>   
>   	return paddr;
> 
