Date:	Tue, 3 Apr 2012 09:58:44 +0100
From:	David Vrabel <david.vrabel@...rix.com>
To:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
CC:	<linux-kernel@...r.kernel.org>, <xen-devel@...ts.xensource.com>,
	Jan Beulich <JBeulich@...e.com>
Subject: Re: [Xen-devel] [PATCH 6/7] xen/setup: Make dom0_mem=XGB behavior
 be similar to classic Xen kernels.

On 30/03/12 21:37, Konrad Rzeszutek Wilk wrote:
> Meaning that we will allocate up to XGB and not consider the
> rest of the memory as a possible balloon goal.

I agree with the comments Jan made on the equivalent Xen patch for this
behaviour: the current behaviour is better than the classic one.

With your new behaviour it will no longer be possible to specify an
unlimited balloon target while limiting the number of initial pages.
This is behaviour that Jan said he used.
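
For example (the sizes here are only illustrative), booting the
hypervisor with something like

    dom0_mem=1G

and no max: gives dom0 a 1G initial allocation while, with the current
kernel behaviour, still leaving it free to balloon up towards the
host's memory.  With your change, as I read it, the balloon ceiling
would follow the initial allocation instead, so growing beyond 1G
would also require an explicit max:.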

This problem is better solved by improving the documentation.  A review
of the places on the xen.org wiki where dom0_mem is mentioned would be
a good start, along with an update to the recently added section for
distro developers.

David

> This results in /proc/meminfo reporting:
> 
> -MemTotal:        2845024 kB
> -MemFree:         2497716 kB
> +MemTotal:        2927192 kB
> +MemFree:         2458952 kB
>  ...
> -DirectMap4k:     8304640 kB
> +DirectMap4k:     3063808 kB
>  DirectMap2M:           0 kB
> 
> on an 8GB machine with 'dom0_mem=3GB' on the Xen hypervisor line.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
> ---
>  arch/x86/xen/setup.c |   16 ++++++++++++++++
>  1 files changed, 16 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index 2a12143..4e4aa8e 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -261,11 +261,27 @@ static unsigned long __init xen_get_max_pages(void)
>  	 * the current maximum rather than the static maximum. In this
>  	 * case the e820 map provided to us will cover the static
>  	 * maximum region.
> +	 *
> +	 * The dom0_mem=min:X,max:Y option is handled differently depending
> +	 * on the Xen version, but in general this is what we get:
> +	 *                | XENMEM_maximum_reser  | nr_pages
> +	 * --------------++-----------------------+-------------------
> +	 *  no dom0_mem   | INT_MAX               | the max_phys_pfn
> +	 *  =3G           | INT_MAX               | 786432
> +	 *  =max:3G       | 786432                | 786432
> +	 *  =min:1G,max:3G| 262144                | 786432
> +	 *
> +	 * The =3G form is often used, and it leads to us initially setting
> +	 * 786432 pages and allowing dom0 to balloon up to max_physical_pfn.
> +	 * This is at odds with the classic Xen kernels, so let's emulate
> +	 * the classic behavior.
>  	 */
>  	if (xen_initial_domain()) {
>  		ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
>  		if (ret > 0)
>  			max_pages = ret;
> +		if (ret == -1UL)
> +			max_pages = xen_start_info->nr_pages;
>  	}
>  
>  	return min(max_pages, MAX_DOMAIN_PAGES);

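As an aside, here is how I read the resulting logic, as a stand-alone
sketch (not the kernel code itself); MAX_DOMAIN_PAGES and the sample
values below are only placeholders for illustration:

/*
 * Stand-alone sketch of the decision logic in the hunk above, just to
 * make the two branches explicit.  MAX_DOMAIN_PAGES and the sample
 * values are assumptions for illustration only.
 */
#include <stdio.h>

#define MAX_DOMAIN_PAGES	(16UL * 1024 * 1024)	/* assumed: 64GB of 4k pages */

static unsigned long resolve_max_pages(long ret, unsigned long nr_pages)
{
	unsigned long max_pages = MAX_DOMAIN_PAGES;

	if (ret > 0)			/* hypervisor gave an explicit reservation */
		max_pages = ret;
	if (ret == -1)			/* "no limit" (the patch checks ret == -1UL) */
		max_pages = nr_pages;	/* new behaviour: clamp to the initial pages */

	return max_pages < MAX_DOMAIN_PAGES ? max_pages : MAX_DOMAIN_PAGES;
}

int main(void)
{
	/* =max:3G style: explicit reservation of 786432 pages (3GB) */
	printf("explicit 3GB reservation -> %lu pages\n",
	       resolve_max_pages(786432, 786432));
	/* unlimited reservation, 786432 initial pages: now clamped to nr_pages */
	printf("unlimited reservation    -> %lu pages\n",
	       resolve_max_pages(-1, 786432));
	return 0;
}

i.e. once the hypervisor reports no limit, the balloon ceiling follows
the initial allocation rather than MAX_DOMAIN_PAGES.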