Date:	Tue, 5 Mar 2013 14:27:06 -0500
From:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:	Daniel Kiper <daniel.kiper@...cle.com>, david.vrabel@...rix.com
Cc:	carsten@...iers.de, darren.s.shepherd@...il.com,
	david.vrabel@...rix.com, james-xen@...gwall.me.uk,
	linux-kernel@...r.kernel.org, xen-devel@...ts.xensource.com
Subject: Re: [PATCH 1/1] xen/balloon: Enforce various limits on target

On Mon, Mar 04, 2013 at 10:14:51PM +0100, Daniel Kiper wrote:
> This patch enforces on the target the limit statically defined in the
> Linux kernel source and the limit defined by the hypervisor or host.
> 
> In particular, this patch fixes a bug which led to flooding the
> dom0 kernel log with messages similar to:
> 
> System RAM resource [mem 0x1b8000000-0x1bfffffff] cannot be added
> xen_balloon: reserve_additional_memory: add_memory() failed: -17
> 
> It also prevents the balloon driver from entering infinite
> loops when the target exceeds the limits in other cases.
> 
> Signed-off-by: Daniel Kiper <daniel.kiper@...cle.com>

One nitpick below; David, could you take a look just for an extra
set of eyes?
> ---
>  drivers/xen/balloon.c |   47 ++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 46 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index a56776d..07da753 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -65,6 +65,7 @@
>  #include <xen/balloon.h>
>  #include <xen/features.h>
>  #include <xen/page.h>
> +#include <xen/xenbus.h>
>  
>  /*
>   * balloon_process() state:
> @@ -490,11 +491,55 @@ static void balloon_process(struct work_struct *work)
>  	mutex_unlock(&balloon_mutex);
>  }
>  
> -/* Resets the Xen limit, sets new target, and kicks off processing. */
> +/* Enforce limits, set new target and kick off processing. */
>  void balloon_set_new_target(unsigned long target)
>  {
> +	domid_t domid = DOMID_SELF;
> +	int rc;
> +	unsigned long long host_limit;
> +
> +	/* Enforce statically defined limit. */
> +	target = min(target, MAX_DOMAIN_PAGES);
> +
> +	if (xen_initial_domain()) {
> +		rc = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
> +
> +		/* Limit is not enforced by hypervisor. */
> +		if (rc == -EPERM)
> +			goto no_host_limit;
> +
> +		if (rc <= 0) {
> +			pr_info("xen_balloon: %s: Initial domain target limit "
> +				"could not be established: %i\n", __func__, rc);

Probably pr_debug, as this just means the user booted without the
dom0_mem_max argument (b/c rc == UINT_MAX). If you can check for that,
I would fold it into the rc == -EPERM check.


> +			goto no_host_limit;
> +		}
> +
> +		host_limit = rc;
> +	} else {
> +		rc = xenbus_scanf(XBT_NIL, "memory", "static-max",
> +							"%llu", &host_limit);
> +
> +		if (rc != 1) {
> +			pr_info("xen_balloon: %s: Guest domain target limit "
> +				"could not be established: %i\n", __func__, rc);
> +			goto no_host_limit;
> +		}
> +
> +		/*
> +		 * The given memory target limit value is in KiB, so it needs
> +		 * converting to pages. PAGE_SHIFT converts bytes to pages,
> +		 * hence PAGE_SHIFT - 10.
> +		 */
> +		host_limit >>= (PAGE_SHIFT - 10);
> +	}
> +
> +	/* Enforce hypervisor/host defined limit. */
> +	target = min(target, (unsigned long)host_limit);
> +
> +no_host_limit:
>  	/* No need for lock. Not read-modify-write updates. */
>  	balloon_stats.target_pages = target;
> +
>  	schedule_delayed_work(&balloon_worker, 0);
>  }
>  EXPORT_SYMBOL_GPL(balloon_set_new_target);
> -- 
> 1.7.10.4
> 