Open Source and information security mailing list archives
Message-ID: <20150612112026.GB24274@localhost>
Date:	Fri, 12 Jun 2015 13:20:26 +0200
From:	"dmitry.torokhov@...il.com" <dmitry.torokhov@...il.com>
To:	Philip Moltmann <moltmann@...are.com>
Cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"pv-drivers@...are.com" <pv-drivers@...are.com>,
	Xavier Deguillard <xdeguillard@...are.com>,
	"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
Subject: Re: [PATCH 6/9] VMware balloon: Do not limit the amount of frees and
 allocations in non-sleep mode.

Hi Philip,

On Thu, Jun 11, 2015 at 08:10:07PM +0000, Philip Moltmann wrote:
> Hi,
> 
> sorry for taking so long to address your concerns.
> 
> > What happens if you run this new driver on an older hypervisor that 
> > does not support batched operations?
> 
> When the driver starts, or when it gets reset, it checks the hypervisor's
> capabilities in vmballoon_send_start. It then resets its state and only
> uses the available functionality.
> 
> A reset happens any time the VM gets hot-migrated, snapshotted, resumed,
> etc.
> 
> I tested this driver on various versions of ESXi to cover the full set of
> possible capabilities.

I understand that you negotiate capabilities between the hypervisor and
the balloon driver; however, that was not my concern (and I am sorry that
I did not express it properly).

The patch description stated:

"Before this patch the slow memory transfer would cause the destination
VM to have internal swapping until all memory is transferred. Now the
memory is transferred fast enough so that the destination VM does not
swap."

As far as I understand, the improvement in memory transfer speed hinges
on the availability of batched operations; you, however, remove the limits
on non-sleep allocations unconditionally. Hence my question: on older
ESXi versions that do not support batched operations, won't this cause the
VM to start swapping?

Thanks.

-- 
Dmitry
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
