Message-ID: <5538CC15.4010005@redhat.com>
Date: Thu, 23 Apr 2015 12:40:21 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Liang Li <liang.z.li@...el.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
CC: gleb@...nel.org, Marcelo Tosatti <mtosatti@...hat.com>,
tglx@...utronix.de, mingo@...hat.com, hpa@...or.com,
x86@...nel.org, joro@...tes.org, yang.z.zhang@...el.com,
Xudong Hao <xudong.hao@...el.com>
Subject: Re: [v6] kvm/fpu: Enable fully eager restore kvm FPU
On 23/04/2015 23:13, Liang Li wrote:
> Remove the lazy FPU logic and use eager FPU entirely. Eager FPU does
> not cause a performance regression, and it simplifies the code.
>
> When compiling a kernel on Westmere, eager FPU is about 0.4% faster
> than lazy FPU.
>
> Signed-off-by: Liang Li <liang.z.li@...el.com>
> Signed-off-by: Xudong Hao <xudong.hao@...el.com>
A patch like this requires much more benchmarking than what you have done.
First, what guest did you use? A modern Linux guest will hardly ever exit
to userspace: the scheduler uses the TSC deadline timer, which is handled
in the kernel; the clocksource uses the TSC; virtio-blk devices are kicked
via ioeventfd.
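(For illustration, and untested: this is roughly how userspace wires up
such a doorbell with KVM_IOEVENTFD.  The vm_fd, the guest-physical
address and the 2-byte write size are assumptions of mine, not anything
from your setup.  Once the ioctl succeeds, guest writes to that address
complete entirely in the kernel, with no exit to userspace.)

    #include <stdint.h>
    #include <string.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Register a guest-physical "doorbell": KVM signals the eventfd on
     * every guest write instead of exiting to the userspace VMM. */
    static int register_doorbell(int vm_fd, uint64_t gpa)
    {
        struct kvm_ioeventfd ioev;
        int efd = eventfd(0, 0);

        if (efd < 0)
            return -1;

        memset(&ioev, 0, sizeof(ioev));
        ioev.addr = gpa;   /* guest-physical doorbell address */
        ioev.len  = 2;     /* e.g. a 16-bit virtio queue notify */
        ioev.fd   = efd;   /* flags == 0: any written value matches */

        return ioctl(vm_fd, KVM_IOEVENTFD, &ioev) ? -1 : efd;
    }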
What happens if you time a Windows guest (without any Hyper-V enlightenments),
or if you use clocksource=acpi_pm?
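(To see which clocksource the guest actually picked, something like this
untested snippet, run inside the guest, will do; the sysfs path is the
standard one:)

    #include <stdio.h>

    /* Print the clocksource the guest kernel selected.  "tsc" means
     * timekeeping never exits; booting with clocksource=acpi_pm forces
     * the ACPI PM timer, whose every read traps out of the guest. */
    int main(void)
    {
        char buf[32];
        FILE *f = fopen("/sys/devices/system/clocksource/clocksource0/"
                        "current_clocksource", "r");

        if (!f) {
            perror("fopen");
            return 1;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("current clocksource: %s", buf);
        fclose(f);
        return 0;
    }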
Second, "0.4%" by itself may not be statistically significant. How did
you gather the result? How many times did you run the benchmark? Did
the guest report any stolen time?
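(As a sketch of the kind of summary I would want to see, with made-up
numbers: if the run-to-run standard deviation is on the order of 0.4%
of the mean, the measured difference is indistinguishable from noise.)

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* hypothetical kernel-build times, seconds; not measured data */
        double runs[] = { 612.1, 609.8, 613.4, 610.9, 611.7 };
        int i, n = sizeof(runs) / sizeof(runs[0]);
        double sum = 0.0, var = 0.0, mean;

        for (i = 0; i < n; i++)
            sum += runs[i];
        mean = sum / n;

        for (i = 0; i < n; i++)
            var += (runs[i] - mean) * (runs[i] - mean);
        var /= n - 1;                    /* sample variance */

        printf("mean %.1fs, stddev %.2fs (%.2f%% of mean)\n",
               mean, sqrt(var), 100.0 * sqrt(var) / mean);
        return 0;
    }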
And finally, even if the patch were indeed a performance improvement,
there is much more that you can remove.  fpu_active is always 1, and
vmx_fpu_activate has only one call site, which can be simplified to just:

    /* With eager FPU, guest changes to CR0.TS need not trap anymore,
     * so TS becomes the only guest-owned CR0 bit. */
    vcpu->arch.cr0_guest_owned_bits = X86_CR0_TS;
    /* In CR0_GUEST_HOST_MASK, a 1 bit is host-owned and traps;
     * a 0 bit is owned by the guest. */
    vmcs_writel(CR0_GUEST_HOST_MASK, ~vcpu->arch.cr0_guest_owned_bits);

and so on.
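(Concretely, and from memory, so take the exact shape with a grain of
salt: once fpu_active is constant, guards like the following become
dead code and the call can be made unconditional:)

    /* illustrative only: with fpu_active always 1, the test is dead */
    if (vcpu->fpu_active)
        kvm_load_guest_fpu(vcpu);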
Paolo