Message-ID: <20180328203036.GK26753@flask>
Date:   Wed, 28 Mar 2018 22:30:37 +0200
From:   Radim Krčmář <rkrcmar@...hat.com>
To:     Babu Moger <babu.moger@....com>
Cc:     joro@...tes.org, tglx@...utronix.de, mingo@...hat.com,
        hpa@...or.com, x86@...nel.org, pbonzini@...hat.com,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 5/5] KVM: SVM: Implement pause loop exit logic in SVM

2018-03-16 16:37-0400, Babu Moger:
> Bring the PLE (pause loop exit) logic to the AMD SVM driver.
> 
> While testing, we found this helps in situations where numerous
> pauses are generated. Without these patches we could see continuous
> VMEXITs due to pause interceptions. Tested on an AMD EPYC server with
> boot parameter idle=poll on a VM with 32 vcpus to simulate extensive
> pause behaviour. Here are the VMEXITs in a 10 second interval.
> 
> #VMEXITS 	Before the change  After the change
> Pauses                  810199                  504
> Total                   882184                  325415
> 
> Signed-off-by: Babu Moger <babu.moger@....com>
> ---
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> @@ -1046,6 +1094,42 @@ static int avic_ga_log_notifier(u32 ga_tag)
>  	return 0;
>  }
>  
> +static void grow_ple_window(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_svm *svm = to_svm(vcpu);
> +	struct vmcb_control_area *control = &svm->vmcb->control;
> +	int old = control->pause_filter_count;
> +
> +	control->pause_filter_count = __grow_ple_window(old,
> +							pause_filter_count,
> +							pause_filter_count_grow,
> +							pause_filter_count_max);
> +
> +	if (control->pause_filter_count != old)
> +		mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
> +
> +	trace_kvm_ple_window_grow(vcpu->vcpu_id,
> +				  control->pause_filter_count, old);
> +}
> +
> +static void shrink_ple_window(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_svm *svm = to_svm(vcpu);
> +	struct vmcb_control_area *control = &svm->vmcb->control;
> +	int old = control->pause_filter_count;
> +
> +	control->pause_filter_count =
> +				__shrink_ple_window(old,
> +						    pause_filter_count,
> +						    pause_filter_count_shrink,
> +						    0);

I've used pause_filter_count as the minimum here as well, and in all
patches used 'unsigned int' instead of 'uint' in the code, to match the
rest of the kernel.

The series is in kvm/queue, please look at the changes and tell me if
you'd like something done differently, thanks.
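For reference, a hedged sketch of the grow/shrink arithmetic the shared
__grow_ple_window()/__shrink_ple_window() helpers implement: grow
multiplies (or adds, when the modifier is large) and caps at a maximum,
shrink divides (or subtracts) and clamps to a minimum -- which, per the
note above, is pause_filter_count on the SVM side. Function names here
are illustrative stand-ins, not the exact kernel code; details in
arch/x86/kvm may differ.

```c
#include <stdint.h>

/* Grow the window: a small modifier is treated as a multiplicative
 * factor, a modifier >= base as an additive increment; the result is
 * capped at 'max'. A modifier of 0 resets to 'base'. */
static unsigned int grow_ple_window_sketch(unsigned int val,
					   unsigned int base,
					   unsigned int modifier,
					   unsigned int max)
{
	uint64_t ret = val;	/* widen to avoid overflow on multiply */

	if (modifier < 1)
		return base;

	if (modifier < base)
		ret *= modifier;
	else
		ret += modifier;

	return ret < max ? (unsigned int)ret : max;
}

/* Shrink the window: divide (or subtract) and clamp to 'min', so the
 * filter count never drops below the configured starting value. */
static unsigned int shrink_ple_window_sketch(unsigned int val,
					     unsigned int base,
					     unsigned int modifier,
					     unsigned int min)
{
	if (modifier < 1)
		return base;

	if (modifier < base)
		val /= modifier;
	else
		val -= modifier;

	return val > min ? val : min;
}
```

With, say, base = 3000 and a grow/shrink modifier of 2, repeated pause
intercepts double the window up to the cap, and shrink halves it back
down but never below 3000.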
