Message-ID: <20130723190521.GA7073@phenom.dumpdata.com>
Date:	Tue, 23 Jul 2013 15:05:21 -0400
From:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:	Ian Campbell <ian.campbell@...rix.com>
Cc:	Stefano Stabellini <stefano.stabellini@...citrix.com>,
	xen-devel@...ts.xensource.com, alex@...x.org.uk,
	dcrisan@...xiant.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/2] xen/balloon: set a mapping for ballooned out pages

On Tue, Jul 23, 2013 at 07:00:09PM +0100, Ian Campbell wrote:
> On Tue, 2013-07-23 at 18:27 +0100, Stefano Stabellini wrote:
> > +static int __cpuinit balloon_cpu_notify(struct notifier_block *self,
> > +				    unsigned long action, void *hcpu)
> > +{
> > +	int cpu = (long)hcpu;
> > +	switch (action) {
> > +	case CPU_UP_PREPARE:
> > +		if (per_cpu(balloon_scratch_page, cpu) != NULL)
> > +			break;
> 
> Thinking about this a bit more -- do we know what happens to the per-cpu
> area for a CPU which is unplugged and then reintroduced? Is it preserved
> or is it reset?
> 
> If it is reset then this gets more complicated :-( We might be able to
> use the core mm page reference count, so that when the last reference is
> removed the page is automatically reclaimed. We can obviously take a
> reference whenever we add a mapping of the trade page, but I'm not sure
> we are always on the path which removes such mappings... Even then you
> could waste pages for some potentially large amount of time each time
> you replug a VCPU.
> 
> Urg, I really hope the per-cpu area is preserved!

It is. During bootup time you see this:

[    0.000000] smpboot: Allowing 128 CPUs, 96 hotplug CPU
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1

which means that all of the per-CPU areas are shrunk down to 128 (from the
CONFIG_NR_CPUS=512 the kernel was built with) and stay that size for the lifetime of the kernel.

You might have to clear it when the vCPU comes back up, though - otherwise you
will have garbage in it.

Or you can use zalloc_cpumask_var_node, which will allocate a dynamic
version of this (sized by the possible CPUs - so in this case 128).
> 
> Ian.
> 
