Message-Id: <6C7372DE-CF54-4EA0-90B1-724C77A81E9F@linux.vnet.ibm.com>
Date:   Wed, 20 Nov 2019 10:38:42 -0600
From:   Lijun Pan <ljp@...ux.vnet.ibm.com>
To:     Greg Kurz <groug@...d.org>
Cc:     Paul Mackerras <paulus@...abs.org>,
        Laurent Vivier <lvivier@...hat.com>,
        linux-kernel@...r.kernel.org, kvm-ppc@...r.kernel.org,
        Satheesh Rajendran <sathnaga@...ux.vnet.ibm.com>,
        Cédric Le Goater <clg@...d.org>,
        Lijun Pan <ljp@...ux.ibm.com>, stable@...r.kernel.org,
        linuxppc-dev@...ts.ozlabs.org,
        David Gibson <david@...son.dropbear.id.au>
Subject: Re: [PATCH v2 1/2] KVM: PPC: Book3S HV: XIVE: Free previous EQ page
 when setting up a new one



> On Nov 13, 2019, at 10:46 AM, Greg Kurz <groug@...d.org> wrote:
> 
> The EQ page is allocated by the guest and then passed to the hypervisor
> with the H_INT_SET_QUEUE_CONFIG hcall. A reference is taken on the page
> before handing it over to the HW. This reference is dropped either when
> the guest issues the H_INT_RESET hcall or when the KVM device is released.
> But the guest can legitimately call H_INT_SET_QUEUE_CONFIG several times,
> either to reset the EQ (vCPU hot unplug) or to set a new EQ (guest reboot).
> In both cases the existing EQ page reference is leaked because we simply
> overwrite it in the XIVE queue structure without calling put_page().
> 
> This is especially visible when the guest memory is backed with huge pages:
> boot a VM up to guest userspace, either reboot it or unplug a vCPU, then quit
> QEMU. The leak can be observed by comparing the value of HugePages_Free in
> /proc/meminfo before and after the VM is run.
> 
> Ideally we'd want the XIVE code to handle the EQ page de-allocation at the
> platform level. This isn't the case right now because the various XIVE
> drivers have different allocation needs. It might be worth introducing
> hooks for this purpose instead of exposing XIVE internals to the drivers,
> but that is a substantial amount of work, left for later.
> 
> In the meantime, for easier backport, fix both vCPU unplug and guest reboot
> leaks by introducing a wrapper around xive_native_configure_queue() that
> does the necessary cleanup.
> 
> Reported-by: Satheesh Rajendran <sathnaga@...ux.vnet.ibm.com>
> Cc: stable@...r.kernel.org # v5.2
> Fixes: 13ce3297c576 ("KVM: PPC: Book3S HV: XIVE: Add controls for the EQ configuration")
> Signed-off-by: Cédric Le Goater <clg@...d.org>
> Signed-off-by: Greg Kurz <groug@...d.org>

Tested-by: Lijun Pan <ljp@...ux.ibm.com>

> ---
> v2: use wrapper as suggested by Cedric
> ---
> arch/powerpc/kvm/book3s_xive_native.c |   31 ++++++++++++++++++++++---------
> 1 file changed, 22 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c
> index 34bd123fa024..0e1fc5a16729 100644
> --- a/arch/powerpc/kvm/book3s_xive_native.c
> +++ b/arch/powerpc/kvm/book3s_xive_native.c
> @@ -50,6 +50,24 @@ static void kvmppc_xive_native_cleanup_queue(struct kvm_vcpu *vcpu, int prio)
> 	}
> }
> 
> +static int kvmppc_xive_native_configure_queue(u32 vp_id, struct xive_q *q,
> +					      u8 prio, __be32 *qpage,
> +					      u32 order, bool can_escalate)
> +{
> +	int rc;
> +	__be32 *qpage_prev = q->qpage;
> +
> +	rc = xive_native_configure_queue(vp_id, q, prio, qpage, order,
> +					 can_escalate);
> +	if (rc)
> +		return rc;
> +
> +	if (qpage_prev)
> +		put_page(virt_to_page(qpage_prev));
> +
> +	return rc;
> +}
> +
> void kvmppc_xive_native_cleanup_vcpu(struct kvm_vcpu *vcpu)
> {
> 	struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu;
> @@ -575,19 +593,14 @@ static int kvmppc_xive_native_set_queue_config(struct kvmppc_xive *xive,
> 		q->guest_qaddr  = 0;
> 		q->guest_qshift = 0;
> 
> -		rc = xive_native_configure_queue(xc->vp_id, q, priority,
> -						 NULL, 0, true);
> +		rc = kvmppc_xive_native_configure_queue(xc->vp_id, q, priority,
> +							NULL, 0, true);
> 		if (rc) {
> 			pr_err("Failed to reset queue %d for VCPU %d: %d\n",
> 			       priority, xc->server_num, rc);
> 			return rc;
> 		}
> 
> -		if (q->qpage) {
> -			put_page(virt_to_page(q->qpage));
> -			q->qpage = NULL;
> -		}
> -
> 		return 0;
> 	}
> 
> @@ -646,8 +659,8 @@ static int kvmppc_xive_native_set_queue_config(struct kvmppc_xive *xive,
> 	  * OPAL level because the use of END ESBs is not supported by
> 	  * Linux.
> 	  */
> -	rc = xive_native_configure_queue(xc->vp_id, q, priority,
> -					 (__be32 *) qaddr, kvm_eq.qshift, true);
> +	rc = kvmppc_xive_native_configure_queue(xc->vp_id, q, priority,
> +					(__be32 *) qaddr, kvm_eq.qshift, true);
> 	if (rc) {
> 		pr_err("Failed to configure queue %d for VCPU %d: %d\n",
> 		       priority, xc->server_num, rc);
> 

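As a side note (not part of the patch above), the HugePages_Free comparison
described in the commit message can be scripted. Below is a minimal,
illustrative userspace sketch in C, assuming only the standard /proc/meminfo
format; it simply prints the HugePages_Free line so the value can be recorded
before and after running the guest.

/*
 * Illustrative helper only, not part of the patch: print the
 * HugePages_Free line from /proc/meminfo so it can be compared
 * before and after the VM run described above.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[256];

	if (!f) {
		perror("fopen /proc/meminfo");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "HugePages_Free:", 15)) {
			fputs(line, stdout);	/* e.g. "HugePages_Free:    16" */
			break;
		}
	}

	fclose(f);
	return 0;
}

Running it once before starting QEMU and once after quitting it should show the
free huge page count dropping on unpatched kernels and staying constant with
this patch applied.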