Date:	Wed, 02 Apr 2008 14:03:28 -0700
From:	Jeremy Fitzhardinge <jeremy@...p.org>
To:	Dave Hansen <dave@...ux.vnet.ibm.com>
CC:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Yasunori Goto <y-goto@...fujitsu.com>,
	Christoph Lameter <clameter@....com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Anthony Liguori <anthony@...emonkey.ws>,
	Mel Gorman <mel@....ul.ie>
Subject: Re: [PATCH RFC] hotplug-memory: refactor online_pages to separate
 zone growth from page onlining

Dave Hansen wrote:
> On Wed, 2008-04-02 at 11:52 -0700, Jeremy Fitzhardinge wrote:
>   
>>> Why don't we just have hotplug work that way?  When we want to take a
>>> guest from 1GB to 1GB+1 page (or whatever), we just hotplug the entire
>>> section (512MB or 1GB or whatever), actually online the whole thing,
>>> then make the balloon driver take it back to where it *should* be.  That
>>> way we're completely reusing existing components that have to be able to
>>> handle this case anyway.
>>>
>>> Yeah, this is suboptimal, and it has a possibility of fragmenting the
>>> memory, but it will only be used for the x86-32 case.
>>>   
>>>       
>> It also requires you actually have the memory on hand to populate the 
>> whole area.  512MB is still a significant chunk on a 2GB server; you may 
>> end up generating significant overall system memory pressure to scrape 
>> together the memory, only to immediately discard it again.
>>     
>
> That's a very good point.  Can we make it so that the hypervisors don't
> actually allocate the memory to the guest until its first touch?  If the
> pages are on the freelist, their *contents* shouldn't be touched at all
> during the onlining process.
>   

No, not in a Xen direct-pagetable guest.  The guest actually sees real 
hardware page numbers (mfns) when the hypervisor gives it a page.  By 
the time the hypervisor gives it a page reference, it is already 
guaranteeing that the page is available for guest use.  The only thing 
that we could do is prevent the guest from mapping the page, but that 
doesn't really achieve much.
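
As a rough sketch of why that is (the table and helper names below are 
made up for illustration; this is not the actual Xen code): a PV guest 
resolves its pseudo-physical frame numbers to machine frames through a 
p2m table, and an entry only becomes valid once the hypervisor has 
actually handed over a real machine page.

/*
 * Conceptual sketch only.  In a direct-pagetable (PV) guest the
 * kernel translates its own pseudo-physical frame numbers (pfns)
 * to machine frame numbers (mfns), so a pfn is only usable after
 * the hypervisor has provided a concrete mfn for it -- there is no
 * "allocate lazily on first touch" stage on the guest side.
 */
#define INVALID_MFN	(~0UL)
#define NR_GUEST_PFNS	(1UL << 20)	/* hypothetical 4GB guest */

static unsigned long p2m_table[NR_GUEST_PFNS];	/* illustrative name */

static unsigned long pfn_to_mfn(unsigned long pfn)
{
	return p2m_table[pfn];		/* a real mfn, or INVALID_MFN */
}

/*
 * Called when the hypervisor grants a machine page to the guest
 * (e.g. via the balloon driver); only after this can the pfn be
 * mapped or put on the freelist.
 */
static void populate_pfn(unsigned long pfn, unsigned long mfn)
{
	p2m_table[pfn] = mfn;
}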

I think we're getting off track here; this is a lot of extra complexity 
just to allow usermode to use /sys to online a chunk of hotplugged 
memory.
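
For context, the usermode step in question amounts to writing "online" 
to a memory section's sysfs state file (the shell equivalent of 
echo online > /sys/devices/system/memory/memoryN/state).  A minimal 
sketch, with an arbitrary section number and minimal error handling:

#include <stdio.h>

int main(void)
{
	/* memory42 is an arbitrary example section number */
	FILE *f = fopen("/sys/devices/system/memory/memory42/state", "w");

	if (!f) {
		perror("open state file");
		return 1;
	}
	fputs("online", f);
	if (fclose(f) != 0) {
		perror("online failed");
		return 1;
	}
	return 0;
}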

    J
