Date: Thu, 20 Aug 2020 18:56:30 -0700
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Tom Lendacky <thomas.lendacky@....com>
Cc: Andy Lutomirski <luto@...nel.org>, Dave Hansen <dave.hansen@...el.com>,
	Jim Mattson <jmattson@...gle.com>, Joerg Roedel <joro@...tes.org>,
	Paolo Bonzini <pbonzini@...hat.com>, Vitaly Kuznetsov <vkuznets@...hat.com>,
	Wanpeng Li <wanpengli@...cent.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	X86 ML <x86@...nel.org>, "Chang S. Bae" <chang.seok.bae@...el.com>,
	Thomas Gleixner <tglx@...utronix.de>, Sasha Levin <sashal@...nel.org>,
	Borislav Petkov <bp@...en8.de>, Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>
Subject: Re: FSGSBASE causing panic on 5.9-rc1

On Thu, Aug 20, 2020 at 07:00:16PM -0500, Tom Lendacky wrote:
> On 8/20/20 5:34 PM, Sean Christopherson wrote:
> > On Thu, Aug 20, 2020 at 03:07:10PM -0700, Andy Lutomirski wrote:
> > > On Thu, Aug 20, 2020 at 3:05 PM Sean Christopherson
> > > <sean.j.christopherson@...el.com> wrote:
> > > >
> > > > On Thu, Aug 20, 2020 at 01:36:46PM -0700, Andy Lutomirski wrote:
> > > > >
> > > > > Depending on how much of a perf hit this is, we could also skip using RDPID
> > > > > in the paranoid path on SVM-capable CPUs.
> > > >
> > > > Doesn't this affect VMX as well? KVM+VMX doesn't restore TSC_AUX until the
> > > > kernel returns to userspace. I don't see anything that prevents the NMI
> > > > RDPID path from affecting Intel CPUs.
> > > >
> > > > Assuming that's the case, I would strongly prefer this be handled in the
> > > > paranoid path. NMIs are unblocked immediately on VMX VM-Exit, which means
> > > > using the MSR load lists in the VMCS, and I hate those with a vengeance.
> > > >
> > > > Perf overhead on VMX would be 8-10% for VM-Exits that would normally stay
> > > > in KVM's run loop, e.g. ~125 cycles for the WRMSR, ~1300-1500 cycles to
> > > > handle the most common VM-Exits. It'd be even higher overhead for the
> > > > VMX preemption timer, which is handled without even enabling IRQs and is
> > > > a hot path as it's used to emulate the TSC deadline timer for the guest.
> > >
> > > I'm fine with that -- let's get rid of RDPID unconditionally in the
> > > paranoid path. Want to send a patch that also adds a comment
> > > explaining why we're not using RDPID?
> >
> > Sure, though I won't object if Tom beats me to the punch :-)
>
> I can do it, but won't be able to get to it until sometime tomorrow.

Confirmed VMX goes kaboom when running perf with a VM. Patch incoming.
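[Editor's note] For readers following the thread, the sketch below is a minimal userspace toy model, not kernel code, of the hazard being discussed: the FSGSBASE paranoid entry path derives the CPU number from IA32_TSC_AUX (via RDPID) in order to locate per-CPU data, but KVM only restores the host's TSC_AUX when returning to userspace, so an NMI landing right after VM-Exit can observe the guest's arbitrary value. All names here (paranoid_find_percpu_base, tsc_aux, percpu_base) are illustrative stand-ins, not the real kernel symbols.

	/*
	 * Toy userspace model of the RDPID/TSC_AUX hazard discussed above.
	 * NOT kernel code; names and values are illustrative only.
	 */
	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	#define NR_CPUS 8

	static uintptr_t percpu_base[NR_CPUS];  /* stand-in for per-CPU GSBASE */
	static uint64_t tsc_aux;                /* stand-in for IA32_TSC_AUX / RDPID */

	/* What the paranoid path conceptually does when FSGSBASE is enabled. */
	static uintptr_t paranoid_find_percpu_base(void)
	{
		uint64_t cpu = tsc_aux;         /* "RDPID" */

		if (cpu >= NR_CPUS)             /* guest value -> nonsense index */
			return 0;               /* real kernel: bogus GSBASE, panic */
		return percpu_base[cpu];
	}

	int main(void)
	{
		for (int i = 0; i < NR_CPUS; i++)
			percpu_base[i] = 0x10000 + (uintptr_t)i * 0x1000;

		tsc_aux = 3;                    /* host value: the CPU number */
		printf("host TSC_AUX:  base=%#" PRIxPTR "\n",
		       paranoid_find_percpu_base());

		tsc_aux = 0xdeadbeef;           /* guest value still loaded after VM-Exit */
		printf("guest TSC_AUX: base=%#" PRIxPTR "\n",
		       paranoid_find_percpu_base());
		return 0;
	}

The direction agreed on in the thread is to drop RDPID from the paranoid path unconditionally, i.e. derive the per-CPU base by other means regardless of whether the CPU might be running SVM or VMX guests, rather than have KVM eagerly restore TSC_AUX via the MSR load lists.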