Message-ID: <db2c757c-2434-0f14-762a-b0e56819bb87@suse.com>
Date:   Thu, 21 Sep 2017 16:41:40 +0200
From:   Juergen Gross <jgross@...e.com>
To:     Boris Ostrovsky <boris.ostrovsky@...cle.com>,
        linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org
Cc:     kirill.shutemov@...ux.intel.com
Subject: Re: [PATCH] xen: support 52 bit physical addresses in pv guests

On 21/09/17 16:14, Boris Ostrovsky wrote:
> 
> 
> On 09/21/2017 04:01 AM, Juergen Gross wrote:
>> Physical addresses on processors supporting 5 level paging can be up to
>> 52 bits wide. For a Xen pv guest running on such a machine those
>> physical addresses have to be supported in order to be able to use any
>> memory on the machine even if the guest itself does not support 5 level
>> paging.
>>
>> So when reading/writing an MFN from/to a pte, don't use the kernel's
>> PTE_PFN_MASK but a new XEN_PTE_MFN_MASK allowing full 40-bit-wide MFNs.
> 
> full 52 bits?

The MFN mask is only 40 bits wide. Together with the 12-bit page offset
that gives the full 52 bits of machine address width.
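
To make the arithmetic concrete, here is a tiny user-space sketch (not
the patch itself; PAGE_SHIFT, PAGE_MASK and the Xen mask names are
simplified stand-ins, and __builtin_popcountll is a GCC/Clang builtin):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT          12
#define PAGE_MASK           (~((1ULL << PAGE_SHIFT) - 1))
/* 52-bit machine addresses. */
#define XEN_PHYSICAL_MASK   ((1ULL << 52) - 1)
/* Bits 12..51 of a pte: a 40-bit MFN. */
#define XEN_PTE_MFN_MASK    ((uint64_t)PAGE_MASK & XEN_PHYSICAL_MASK)

int main(void)
{
        /* The width of each mask is its number of set bits. */
        printf("machine address bits: %d\n",
               __builtin_popcountll(XEN_PHYSICAL_MASK));   /* 52 */
        printf("MFN bits:             %d\n",
               __builtin_popcountll(XEN_PTE_MFN_MASK));    /* 40 */
        printf("page offset bits:     %d\n", PAGE_SHIFT);  /* 12 */
        return 0;
}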

> 
>>
>> Signed-off-by: Juergen Gross <jgross@...e.com>
>> ---
>>   arch/x86/include/asm/xen/page.h | 11 ++++++++++-
>>   arch/x86/xen/mmu_pv.c           |  4 ++--
>>   2 files changed, 12 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
>> index 07b6531813c4..bcb8b193c8d1 100644
>> --- a/arch/x86/include/asm/xen/page.h
>> +++ b/arch/x86/include/asm/xen/page.h
>> @@ -26,6 +26,15 @@ typedef struct xpaddr {
>>       phys_addr_t paddr;
>>   } xpaddr_t;
>>
>> +#ifdef CONFIG_X86_64
>> +#define XEN_PHYSICAL_MASK    ((1UL << 52) - 1)
> 
> 
> SME is not supported for PV guests, but for consistency (and in case the
> SME bit somehow gets set):
> #define XEN_PHYSICAL_MASK    __sme_clr(((1UL << 52) - 1))

Hmm, really? Shouldn't we rather add something like

BUG_ON(sme_active());

somewhere?

> But the real question that I have is whether this patch is sufficient.
> We are trying to preserve more bits in the MFN, but then this MFN is
> used, say, in pte_pfn_to_mfn() to build a pte. Can we be sure that the
> pte won't be stripped of higher bits in native code (again, as an
> example, native_make_pte()) because we are compiled with 5LEVEL?

native_make_pte() just encapsulates pte_t. It doesn't modify the value
of the pte at all.

Physical address bits are only ever masked away via PTE_PFN_MASK, and I
haven't found any place where it is used for an MFN other than those I
touched in this patch.
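
Just to illustrate why the two masks differ, a standalone sketch (not
kernel code; the 46-bit physical limit assumed below for a guest built
without 5-level paging is an assumption, and the macros are simplified
stand-ins for the kernel's definitions):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT            12
#define PAGE_MASK             (~((1ULL << PAGE_SHIFT) - 1))
/* Assumed limit of a guest kernel without 5-level paging. */
#define GUEST_PHYSICAL_MASK   ((1ULL << 46) - 1)
/* Limit of the host's machine addresses. */
#define XEN_PHYSICAL_MASK     ((1ULL << 52) - 1)
#define PTE_PFN_MASK          ((uint64_t)PAGE_MASK & GUEST_PHYSICAL_MASK)
#define XEN_PTE_MFN_MASK      ((uint64_t)PAGE_MASK & XEN_PHYSICAL_MASK)

int main(void)
{
        /* A pte whose MFN lies above the guest's own physical limit. */
        uint64_t pte = (1ULL << 50) | 0x067;

        /* The guest-sized mask drops the high MFN bits ... */
        printf("pte & PTE_PFN_MASK:     %#llx\n",
               (unsigned long long)(pte & PTE_PFN_MASK));
        /* ... while the Xen mask keeps the full machine address. */
        printf("pte & XEN_PTE_MFN_MASK: %#llx\n",
               (unsigned long long)(pte & XEN_PTE_MFN_MASK));
        return 0;
}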


Juergen
