Date:	Tue, 28 Oct 2014 11:53:37 +0000
From:	Andrew Cooper <andrew.cooper3@...rix.com>
To:	Ian Campbell <Ian.Campbell@...rix.com>,
	Juergen Gross <jgross@...e.com>
CC:	<boris.ostrovsky@...cle.com>, <xen-devel@...ts.xensource.com>,
	David Vrabel <david.vrabel@...rix.com>,
	<linux-kernel@...r.kernel.org>
Subject: Re: [Xen-devel] [PATCH 0/2] xen: Switch to virtual mapped linear
 p2m list

On 28/10/14 09:51, Ian Campbell wrote:
> On Tue, 2014-10-28 at 06:00 +0100, Juergen Gross wrote:
>> On 10/27/2014 04:16 PM, David Vrabel wrote:
>>> On 27/10/14 14:52, Juergen Gross wrote:
>>>> Paravirtualized kernels running on Xen use a three level tree for
>>>> translation of guest specific physical addresses to machine global
>>>> addresses. This p2m tree is used for construction of page table
>>>> entries, so the p2m tree walk is performance critical.
>>>>
>>>> By using a linear virtual mapped p2m list accesses to p2m elements
>>>> can be sped up while even simplifying code. To achieve this goal
>>>> some p2m related initializations have to be performed later in the
>>>> boot process, as the final p2m list can be set up only after basic
>>>> memory management functions are available.
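
To make the comparison concrete, here is a rough sketch of the two lookup
schemes being discussed. The names, shift values and globals below are
placeholders for illustration, not the actual Linux/Xen symbols:

/* Illustration only: a minimal model of the two p2m lookup schemes.
 * Names, sizes and globals are placeholders, not real kernel symbols. */

#define P2M_SHIFT	9			/* 512 8-byte entries per 4 KiB page (64-bit) */
#define P2M_MASK	((1UL << P2M_SHIFT) - 1)

/* Three-level tree: top -> mid -> leaf page of MFN entries. */
static unsigned long ***p2m_top;

/* Linear list mapped contiguously in virtual address space; sparse
 * ranges are handled by the page tables instead of extra tree levels. */
static unsigned long *xen_p2m_list;

/* Tree walk: two extra dereferences on every pfn -> mfn translation. */
static unsigned long tree_pfn_to_mfn(unsigned long pfn)
{
	unsigned long topidx = pfn >> (2 * P2M_SHIFT);
	unsigned long mididx = (pfn >> P2M_SHIFT) & P2M_MASK;
	unsigned long idx    = pfn & P2M_MASK;

	return p2m_top[topidx][mididx][idx];
}

/* Virtually mapped linear list: a single indexed load. */
static unsigned long linear_pfn_to_mfn(unsigned long pfn)
{
	return xen_p2m_list[pfn];
}

This is also why the final list can only be set up later in boot: the
virtual mapping backing it needs basic memory management to be running.
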
>>> What impact does this have on 32-bit guests, which don't have a huge
>>> amount of virtual address space?
>>>
>>> I think a 32-bit guest could have up to 64 GiB of PFNs, which would
>>> require a 128 MiB p2m array, which is too large?
>> It is 64 MB (one entry on 32 bit is 32 bits :-) ).
>>
>> With an m2p array of only 16 MB, I doubt a 32-bit guest can be larger
>> than 16 GB, or am I wrong here?
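
Running the numbers as a quick back-of-the-envelope check (assuming 4 KiB
pages and 4-byte p2m/m2p entries on a 32-bit guest):

#include <stdio.h>

int main(void)
{
	unsigned long long page  = 4096;		/* 4 KiB pages */
	unsigned long long entry = 4;			/* 32-bit p2m/m2p entries */

	unsigned long long guest_ram = 64ULL << 30;	/* 64 GiB of PFNs */
	unsigned long long p2m_bytes = guest_ram / page * entry;

	unsigned long long m2p_bytes = 16ULL << 20;	/* 16 MiB m2p */
	unsigned long long host_ram  = m2p_bytes / entry * page;

	printf("p2m for a 64 GiB guest: %llu MiB\n", p2m_bytes >> 20);	/* 64 */
	printf("a 16 MiB m2p covers:    %llu GiB\n", host_ram >> 30);	/* 16 */
	return 0;
}

That is, the 128 MiB figure above assumed 8-byte entries; with 4-byte
entries the p2m is 64 MiB, and a 16 MiB m2p would indeed only cover 16 GiB
of machine memory, which is what prompts the question.
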
> I think they can be bigger, the compat r/o m2p is 168MB, since Xen
> doesn't need to be in the hole as well (like it was with a real 32-bit
> Xen). There is also some scope for dynamic sizing of the hole (queried
> via XENMEM_machphys_mapping), I'm not sure if pvops makes use of that
> though.
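
For reference, a sketch of how a guest can query that. The structure and
constant names follow the Xen public headers as far as I can tell, but
error handling is trimmed, so treat this as an outline rather than the
exact pvops code:

#include <xen/interface/memory.h>	/* struct xen_machphys_mapping */
#include <asm/xen/hypercall.h>		/* HYPERVISOR_memory_op() */

static unsigned long *m2p_table;
static unsigned long m2p_entries;

static void query_m2p_mapping(void)
{
	struct xen_machphys_mapping mapping;

	if (HYPERVISOR_memory_op(XENMEM_machphys_mapping, &mapping) == 0) {
		/* Hypervisor reports where the r/o M2P lives and how many
		 * MFNs it covers, so the hole can be sized dynamically. */
		m2p_table   = (unsigned long *)mapping.v_start;
		m2p_entries = mapping.max_mfn + 1;
	} else {
		/* Older hypervisor: fall back to the fixed ABI layout. */
		m2p_table   = (unsigned long *)MACH2PHYS_VIRT_START;
		m2p_entries = MACH2PHYS_NR_ENTRIES;
	}
}

In other words, a guest does not have to rely on the fixed compat layout
if the hypervisor answers this query.
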
>
> In practice a 32-bit kernel starts to get pretty unhappy somewhere
> between 32 and 64GB because it runs out of low memory for various
> structures which are sized according to the amount of RAM. Or it did,
> it's been years since I've tried, maybe things are more able to use
> highmem now. In any case if you have such large amounts of RAM using a
> 64-bit kernel would be advisable.

It is XenServer's experience that something (I presume a
power-of-two-aligned mapping) cuts off at 128MB of the compat m2p,
allowing 32-bit guest pages to exist in the first 128GB of host RAM.

Technically speaking, there is nothing preventing a 32-bit PV guest from
being that large.  The traditional issues with 32-bit PAE kernels do not
apply, as Xen is running in long mode, but kernel lowmem will be the
limiting factor.  Switching to a 2/2 split will help, but it is a losing
battle.
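
For a rough feel for where lowmem goes: the figures below are ballpark
values I am assuming (roughly 32 bytes per struct page on 32-bit x86 and
roughly 896 MiB of lowmem with a 3G/1G split), not numbers from this
thread:

#include <stdio.h>

int main(void)
{
	unsigned long long ram       = 64ULL << 30;	/* 64 GiB guest */
	unsigned long long page_size = 4096;
	unsigned long long struct_pg = 32;		/* approx. sizeof(struct page) */
	unsigned long long lowmem    = 896ULL << 20;	/* approx. lowmem, 3G/1G split */

	unsigned long long mem_map = ram / page_size * struct_pg;

	printf("struct page array: %llu MiB\n", mem_map >> 20);	/* 512 */
	printf("lowmem (3G/1G):    %llu MiB\n", lowmem >> 20);	/* 896 */
	return 0;
}

The struct page array alone eats over half of lowmem before any other
pinned allocations, which is why a 2/2 split only buys headroom rather
than fixing the problem.
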

It should be noted that OEL formally supports 64GB 32-bit PV VMs, and as
a result the XenServer VM lifecycle tests exercise that configuration,
and it works.

~Andrew
