lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:	Wed, 02 Apr 2008 11:46:02 -0700
From:	Dave Hansen <dave@...ux.vnet.ibm.com>
To:	Jeremy Fitzhardinge <jeremy@...p.org>
Cc:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Yasunori Goto <y-goto@...fujitsu.com>,
	Christoph Lameter <clameter@....com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Anthony Liguori <anthony@...emonkey.ws>,
	Mel Gorman <mel@....ul.ie>
Subject: Re: [PATCH RFC] hotplug-memory: refactor online_pages to separate
	zone growth from page onlining


On Mon, 2008-03-31 at 11:06 -0700, Jeremy Fitzhardinge wrote:
> >> That said, if (partial-)sections were much smaller - say 2-4 meg -
> and 
> >> page migration/defrag worked reliably, then we could probably do
> without 
> >> the balloon driver and do it all in terms of memory hot plug/unplug.  
> >> That would give us a general mechanism which could either be driven from 
> >> userspace, and/or have in-kernel Xen/kvm/s390/etc policy modules.  Aside 
> >> from small sections, the only additional requirement would be an online 
> >> hook which can actually attach backing memory to the pages being 
> >> onlined, rather than just assuming an underlying DIMM as current code does.
> >>     
> >
> > Even with 1MB sections
> 
> 1MB is too small.  It shouldn't be smaller than the size of a large page.

Oh, I was just using 1MB as an easy-to-do-math-on-a-napkin number. :)

> >  and a flat sparsemem map, you're only looking at
> > ~500k of overhead for the sparsemem storage.  Less if you use vmemmap.  
> >   
> 
> At the moment my concern is 32-bit x86, which doesn't support vmemmap or 
> sections smaller than 512MB because of the shortage of page flags bits.

Yeah, I forgot that we didn't have vmemmap on x86-32.  Ugh.

OK, here's another idea: Xen (and the balloon driver) already handle a
case where a guest boots up with 2GB of memory but only needs 1GB,
right?  It will balloon the guest down to 1GB from 2GB.

Why don't we just have hotplug work that way?  When we want to take a
guest from 1GB to 1GB+1 page (or whatever), we just hotplug the entire
section (512MB or 1GB or whatever), actually online the whole thing,
then make the balloon driver take it back to where it *should* be.  That
way we're completely reusing existing components that have do be able to
handle this case anyway.

Yeah, this is suboptimal, an it has a possibility of fragmenting the
memory, but it will only be used for the x86-32 case.

-- Dave

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ