Message-ID: <20190510174900.GB16852@linux.intel.com>
Date: Fri, 10 May 2019 10:49:00 -0700
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: wang.yi59@....com.cn
Cc: pbonzini@...hat.com, rkrcmar@...hat.com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, hpa@...or.com, x86@...nel.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] [next] KVM: lapic: allow set apic debug dynamically
On Fri, May 10, 2019 at 12:54:21PM +0800, wang.yi59@....com.cn wrote:
> I grepped "debug" in arch/x86/kvm and found these *_debug:
> ioapic_debug
> apic_debug
>
> and dbg in mmu.c, which would be better renamed to mmu_debug as you said.
>
> and vcpu_debug, which uses the kvm_debug macro.
>
> The kvm_debug macro uses pr_debug, which can be enabled dynamically at
> runtime, so how about changing all *_debug in kvm to pr_debug like vcpu_debug?
It's still the same end result: we're bloating and slowing KVM with code
and conditionals that aren't useful in normal operation. Grep for vcpu_debug
a bit further and you'll see that the only uses in x86 are when the guest
has crashed, is being reset, or is accessing an unhandled MSR and KVM is
injecting a #GP in response. In other words, the existing uses are only
in code that isn't remotely performance critical.
hyperv.c: vcpu_debug(vcpu, "hv crash (0x%llx 0x%llx 0x%llx 0x%llx 0x%llx)\n",
hyperv.c: vcpu_debug(vcpu, "hyper-v reset requested\n");
x86.c: vcpu_debug_ratelimited(vcpu, "unhandled wrmsr: 0x%x data 0x%llx\n",
x86.c: vcpu_debug_ratelimited(vcpu, "unhandled rdmsr: 0x%x\n",
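Roughly, the difference between the current macro and a pr_debug conversion
looks like this (a sketch for illustration, not the exact lapic.c definitions):

#include <linux/printk.h>

/*
 * Current form (roughly): compiles to nothing in a normal build, so the
 * call sites cost zero code and zero data unless a developer edits the
 * definition to print something.
 */
#define apic_debug(fmt, arg...) do {} while (0)

/*
 * Proposed conversion: every call site stays in the binary.  With
 * CONFIG_DYNAMIC_DEBUG the message can be flipped on at runtime, e.g.
 *   echo 'file lapic.c +p' > /sys/kernel/debug/dynamic_debug/control
 * but each site still carries a descriptor, its format string, and a
 * conditional, even when nobody is debugging.
 */
#undef apic_debug
#define apic_debug(fmt, arg...) pr_debug(fmt, ##arg)
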
pr_debug does have more direct uses, notably in nested VMX and KVM TSC
handling. Similar to the vcpu_debug case above, the nVMX uses are all in
failure paths and not performance critical. The TSC code does have one
path that may affect performance (get_kvmclock_ns()->kvm_get_time_scale()),
but I don't think that should be considered as setting a precedent. In
fact, it may make sense to convert the TSC pr_debugs to be gated by
CONFIG_DEBUG_KVM as well.
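Something along these lines, purely as a sketch (the helper name is made up
and CONFIG_DEBUG_KVM would be the new Kconfig symbol):

#include <linux/printk.h>

/* Hypothetical Kconfig-gated wrapper, not in the tree today. */
#ifdef CONFIG_DEBUG_KVM
#define tsc_debug(fmt, arg...) pr_debug("kvm: " fmt, ##arg)
#else
#define tsc_debug(fmt, arg...) do {} while (0)
#endif

Built without CONFIG_DEBUG_KVM the call sites disappear entirely, so
kvm_get_time_scale() pays nothing; built with it, the messages remain
runtime-controllable through dynamic debug.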
Paolo, do you have any thoughts?