Date:	Wed, 10 Oct 2012 00:09:12 +0000
From:	KY Srinivasan <kys@...rosoft.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
	"olaf@...fle.de" <olaf@...fle.de>,
	"apw@...onical.com" <apw@...onical.com>,
	"andi@...stfloor.org" <andi@...stfloor.org>
Subject: RE: [PATCH 2/2] Drivers: hv: Add Hyper-V balloon driver



> -----Original Message-----
> From: Andrew Morton [mailto:akpm@...ux-foundation.org]
> Sent: Tuesday, October 09, 2012 3:45 PM
> To: KY Srinivasan
> Cc: gregkh@...uxfoundation.org; linux-kernel@...r.kernel.org;
> devel@...uxdriverproject.org; olaf@...fle.de; apw@...onical.com;
> andi@...stfloor.org
> Subject: Re: [PATCH 2/2] Drivers: hv: Add Hyper-V balloon driver
> 
> On Sun,  7 Oct 2012 16:59:46 -0700
> "K. Y. Srinivasan" <kys@...rosoft.com> wrote:
> 
> > Add the basic balloon driver.
> 
> hm, how many balloon drivers does one kernel need?
> 
> Although I see that the great majority of this code is hypervisor-specific.
> 
> > Windows hosts dynamically manage the guest memory allocation via a
> > combination of memory hot add and ballooning. Memory hot add is used to
> > grow the guest memory up to the maximum memory that can be allocated to
> > the guest. Ballooning is used both to shrink the guest memory and to
> > expand it back up to that maximum. Supporting hot add needs additional
> > support from the host; we will support hot add when that support is
> > available. For now, by setting the VM startup memory to the VM max
> > memory, we can use ballooning alone to dynamically manage memory
> > allocation amongst competing guests on a given host.
> >
> >
> > ...
> >
> > +static int  alloc_balloon_pages(struct hv_dynmem_device *dm, int num_pages,
> > +			 struct dm_balloon_response *bl_resp, int alloc_unit,
> > +			 bool *alloc_error)
> > +{
> > +	int i = 0;
> > +	struct page *pg;
> > +
> > +	if (num_pages < alloc_unit)
> > +		return 0;
> > +
> > +	for (i = 0; (i * alloc_unit) < num_pages; i++) {
> > +		if (bl_resp->hdr.size + sizeof(union dm_mem_page_range) >
> > +			PAGE_SIZE)
> > +			return i * alloc_unit;
> > +
> > +		pg = alloc_pages(GFP_HIGHUSER | __GFP_NORETRY | GFP_ATOMIC |
> > +				__GFP_NOMEMALLOC | __GFP_NOWARN,
> > +				get_order(alloc_unit << PAGE_SHIFT));
> 
> This choice of GFP flags is basically impossible to understand, so I
> suggest that a comment be added explaining it all.
> 
> I'm a bit surprised at the inclusion of GFP_ATOMIC as it will a) dip
> into page reserves, which might be undesirable and b) won't even
> reclaim clean pages, which seems desirable.  I suggest this also be
> covered in the forthcoming code comment.

I will rework these flags and add appropriate comments.
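For example, something along these lines (an illustrative sketch only, not
the final patch; the exact flag mix is still being worked out, borrowing
from the choices drivers/misc/vmw_balloon.c makes):

	/*
	 * Ballooning is not a latency-critical path, so there is no need
	 * to dip into the emergency reserves (no GFP_ATOMIC, and
	 * __GFP_NOMEMALLOC keeps us out of the reserves).  __GFP_NORETRY
	 * bounds the effort spent on a higher-order request, and
	 * __GFP_NOWARN keeps expected failures quiet: a failed allocation
	 * simply ends this balloon-up pass.
	 */
	pg = alloc_pages(GFP_HIGHUSER | __GFP_NORETRY |
			 __GFP_NOMEMALLOC | __GFP_NOWARN,
			 get_order(alloc_unit << PAGE_SHIFT));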

> 
> drivers/misc/vmw_balloon.c seems to me to have used better choices here.
> 
> > +		if (!pg) {
> > +			*alloc_error = true;
> > +			return i * alloc_unit;
> > +		}
> > +
> > +		totalram_pages -= alloc_unit;
> 
> Well, I'd consider totalram_pages to be an mm-private thing which drivers
> shouldn't muck with.  Why is this done?

By adjusting totalram_pages, the information presented in /proc/meminfo
(MemTotal) correctly reflects the amount of memory currently assigned to the guest.
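To illustrate the intent (a sketch only; the balloon-down side is not in the
hunk quoted above, so this shows the shape of the code rather than a verbatim
quote from the patch):

	/* Balloon up (quoted above): pages handed back to the host stop
	 * counting towards MemTotal in /proc/meminfo. */
	totalram_pages -= alloc_unit;

	/* Balloon down (not quoted here): once the host returns the memory
	 * and the pages are freed, the adjustment is undone so MemTotal
	 * grows back to match what the guest actually owns. */
	totalram_pages += alloc_unit;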
 
> 
> drivers/xen/balloon.c and drivers/virtio/virtio_balloon.c also alter
> totalram_pages, also without explaining why.
> drivers/misc/vmw_balloon.c does not.
> 
> > +		dm->num_pages_ballooned += alloc_unit;
> > +
> > +		bl_resp->range_count++;
> > +		bl_resp->range_array[i].finfo.start_page =
> > +			page_to_pfn(pg);
> > +		bl_resp->range_array[i].finfo.page_cnt = alloc_unit;
> > +		bl_resp->hdr.size += sizeof(union dm_mem_page_range);
> > +
> > +	}
> > +
> > +	return num_pages;
> > +}
> >
> > ...
> >
> 
> 
> 

Thanks for the prompt review. I will address your comments and repost the patches soon.
If it is OK with you, I am going to keep the code that manipulates totalram_pages,
for the reasons I listed above.

Regards,

K. Y

