Message-ID: <1367329458.3142.524.camel@zakaz.uk.xensource.com>
Date:	Tue, 30 Apr 2013 14:44:18 +0100
From:	Ian Campbell <ijc@...lion.org.uk>
To:	Daniel Kiper <daniel.kiper@...cle.com>
Cc:	"carsten@...iers.de" <carsten@...iers.de>,
	"darren.s.shepherd@...il.com" <darren.s.shepherd@...il.com>,
	David Vrabel <david.vrabel@...rix.com>,
	Ian Jackson <Ian.Jackson@...citrix.com>,
	"james-xen@...gwall.me.uk" <james-xen@...gwall.me.uk>,
	"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>
Subject: Re: [PATCH v2 2/2] xen/balloon: Enforce various limits on target

On Tue, 2013-04-30 at 13:59 +0100, Daniel Kiper wrote:
> On Mon, Apr 29, 2013 at 03:44:09PM +0100, Ian Campbell wrote:
> > On Mon, 2013-04-29 at 12:37 +0100, Daniel Kiper wrote:
> >
> > > This patch enforces on the target both the limit statically defined
> > > in the Linux kernel source and the limit defined by the hypervisor or
> > > host. This way the balloon driver will not attempt to populate pages
> > > above the given limits, because such attempts may fail.
> > >
> > > In particular, this patch fixes a bug which flooded the
> > > dom0 kernel log with messages similar to:
> > >
> > > System RAM resource [mem 0x1b8000000-0x1bfffffff] cannot be added
> > > xen_balloon: reserve_additional_memory: add_memory() failed: -17
> >
> > I think it would be OK to simply tone down this message (and perhaps add
> > the failed pages to the balloon, if that makes sense). This isn't
> > dissimilar to increase_reservation failing.
> 
> If add_memory() fails, it is a hard error. It means that we do not
> know where new or ballooned pages should be placed.

I see that add_memory() is a generic or arch-level function rather than
a ballooning-specific one. Under what circumstances can it fail, and how
do those circumstances relate to the setting of the balloon target?
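For reference, -17 is -EEXIST. To make the question concrete, here is a
minimal sketch (not the actual driver code; the helper name is made up,
and it assumes the current add_memory(nid, start, size) signature) of
the call whose failure floods the log quoted above:

#include <linux/types.h>
#include <linux/printk.h>
#include <linux/memory_hotplug.h>	/* add_memory() */

/*
 * Hypothetical helper: hot-add a new System RAM region on behalf of
 * the balloon driver.  add_memory() first registers the range as an
 * iomem resource; if the range overlaps an already-registered
 * resource, the generic hotplug code prints the "System RAM resource
 * ... cannot be added" message and add_memory() returns -EEXIST
 * (-17), which is exactly the error quoted in the log above.
 */
static int balloon_hotadd_sketch(int nid, u64 start, u64 size)
{
	int rc = add_memory(nid, start, size);

	if (rc)
		pr_debug("hot-add of [%llx-%llx] failed: %d\n",
			 start, start + size - 1, rc);
	return rc;
}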

> > > +/*
> > > + * Extra internal memory reserved by libxl.
> > > + * Check tools/libxl/libxl_memory.txt file in Xen source for more details.
> > > + */
> > > +#define LIBXL_MAXMEM_CONSTANT_PAGES	(1024 * 1024 / PAGE_SIZE)
> >
> > I think we need to find a way to achieve your aims which doesn't require
> > leaking internal implementation details of libxl into the guest kernels.
> > What happens if libxl decides to double this?
> 
> I agree that this is not an elegant solution. However, if we would like
> to be in line with docs/misc/libxl_memory.txt (this is the correct path)
> then this is a must.

I'm not sure about this; that file describes the toolstack's view of the
memory in a system. That need not necessarily correspond with the
guest's view (although you would hope it would be a superset).

Surely it is logically wrong to bake toolstack-specific knowledge into
the guest? If someone can describe a meaningful semantic for this number
in a toolstack-independent way then perhaps it would be appropriate to
do something with it. Having looked at both the document and the code, I
have no idea what this value actually is.

> At one point I thought that this value could be passed via xenstore,
> but I think there is only a small chance it will be changed in the
> near future. As far as I know, this slack is reserved now just in case
> (correct me if I am wrong). If this value is ever changed, we could
> pass the new value via xenstore (or another convenient mechanism).
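For illustration only: if the toolstack did publish this slack via
xenstore, the guest could read it the same way the balloon driver
already reads memory/target. The key name below is hypothetical, a
sketch of the mechanism rather than an agreed interface:

#include <xen/xenbus.h>	/* xenbus_scanf(), XBT_NIL */

/*
 * Hypothetical: read a toolstack-published slack value (in pages)
 * from xenstore.  The "memory/toolstack-slack" key does not exist
 * today; it stands in for whatever name the toolstack and guest
 * would agree on.  Returns 0 when the key is absent.
 */
static unsigned long read_toolstack_slack(void)
{
	unsigned long long slack;

	if (xenbus_scanf(XBT_NIL, "memory", "toolstack-slack",
			 "%llu", &slack) != 1)
		return 0;

	return (unsigned long)slack;
}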
> 
> > > +
> > >  #ifdef CONFIG_HIGHMEM
> > >  #define inc_totalhigh_pages() (totalhigh_pages++)
> > >  #define dec_totalhigh_pages() (totalhigh_pages--)
> > > @@ -491,11 +496,42 @@ static void balloon_process(struct work_struct *work)
> > >  	mutex_unlock(&balloon_mutex);
> > >  }
> > >
> > > -/* Resets the Xen limit, sets new target, and kicks off processing. */
> > > +/* Enforce limits, set new target and kick off processing. */
> > >  void balloon_set_new_target(unsigned long target)
> > >  {
> > > +	domid_t domid = DOMID_SELF;
> > > +	int rc;
> > > +
> > > +	/* Enforce statically defined limit. */
> > > +	target = min(target, MAX_DOMAIN_PAGES);
> > > +
> > > +	rc = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
> > > +
> > > +	if (xen_initial_domain()) {
> > > +		if (rc <= 0) {
> > > +			pr_debug("xen_balloon: %s: Initial domain target limit "
> > > +					"could not be established: %i\n",
> > > +					__func__, rc);
> > > +			goto no_host_limit;
> > > +		}
> > > +	} else {
> > > +		if (rc <= 0) {
> > > +			pr_info("xen_balloon: %s: Guest domain target limit "
> > > +				"could not be established: %i\n", __func__, rc);
> > > +			goto no_host_limit;
> > > +		}
> > > +
> > > +		/* Do not take into account memory reserved for internal stuff. */
> > > +		rc -= LIBXL_MAXMEM_CONSTANT_PAGES;
> > > +	}
> >
> > Why is this needed? Wouldn't it be a toolstack bug to set the target
> > greater than this limit? But if it did, it would no doubt be expecting
> > the guest to try to reach that limit (perhaps it intends to raise the
> > maximum later?).
> 
> For a domU, XENMEM_maximum_reservation is always equal to
> <user_requested_maximum> + LIBXL_MAXMEM_CONSTANT_PAGES.
> According to docs/misc/libxl_memory.txt, LIBXL_MAXMEM_CONSTANT_PAGES
> is reserved for extra internal memory. It means that we should not
> allow the balloon driver to reserve more than <user_requested_maximum>.
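(Worked example, assuming 4 KiB pages, to make the slack concrete:

	LIBXL_MAXMEM_CONSTANT_PAGES = 1024 * 1024 / 4096 = 256 pages (1 MiB)

so for a guest configured with maxmem = 2048 MiB, i.e. 524288 pages,
XENMEM_maximum_reservation returns 524288 + 256 = 524544 pages, and
subtracting the slack recovers the 524288 pages the user asked for.)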

Why not? What is the downside of reserving a little more than we should?
If the toolstack cares then it will presumably never set a target which
exceeds <user_requested_maximum>.

Ian.
