Date:   Tue, 25 Feb 2020 14:31:20 +0800
From:   Wanpeng Li <kernellwp@...il.com>
To:     何容光(邦采) <bangcai.hrg@...baba-inc.com>
Cc:     namit <namit@...are.com>, peterz <peterz@...radead.org>,
        pbonzini <pbonzini@...hat.com>,
        "dave.hansen" <dave.hansen@...el.com>, mingo <mingo@...hat.com>,
        tglx <tglx@...utronix.de>, x86 <x86@...nel.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        "dave.hansen" <dave.hansen@...ux.intel.com>, bp <bp@...en8.de>,
        luto <luto@...nel.org>, kvm <kvm@...r.kernel.org>,
        "yongting.lyt" <yongting.lyt@...baba-inc.com>,
        吴启翾(启翾) <qixuan.wqx@...baba-inc.com>
Subject: Re: [RFC] Question about async TLB flush and KVM pv tlb improvements

On Tue, 25 Feb 2020 at 12:12, 何容光(邦采) <bangcai.hrg@...baba-inc.com> wrote:
>
> Hi there,
>
> I saw this async TLB flush patch at https://lore.kernel.org/patchwork/patch/1082481/ , and I am wondering, one year later, whether you think this patch is practical or whether it has functional flaws?
> From my point of view, Nadav's patch seems to have no obvious flaw, but I am not familiar with the relationship between CPU speculative execution and stale TLB entries, since it is usually transparent to software. Under which conditions would a machine check occur? Is there a reference I can learn from?
> BTW, I am trying to improve KVM PV TLB flush: if a vCPU is preempted, the initiating CPU does not send an IPI to the preempted vCPU or wait for it. When the preempted vCPU resumes, I want the VMM to inject an interrupt, perhaps an NMI, into the vCPU and let the vCPU flush its own TLB, instead of the VMM flushing the TLB on the vCPU's behalf. If the vCPU is not in kernel mode or has interrupts disabled, it would fall back to a VMM flush. Since a VMM flush using INVVPID flushes the TLB entries of all PCIDs, it has some negative performance impact on the preempted vCPU. Does this have the same problem as the async TLB flush patch?

PV TLB shootdown is disabled in the dedicated (non-overcommit) scenario. I believe
there are already heavy TLB misses in overcommit scenarios even before
this feature, so flushing all TLB entries associated with one specific
VPID will not make things that much worse.
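For reference, the decision being discussed is the one the existing PV TLB shootdown path already makes on the guest side: send an IPI to a running vCPU, but for a preempted vCPU just set a flush-on-resume flag in the shared steal-time area and let the host flush (via INVVPID, which drops the translations for every PCID under that VPID) before the vCPU next runs. The following is a minimal standalone sketch of that logic, not the actual kernel code; the struct and helper names are simplified stand-ins for `struct kvm_steal_time` and `kvm_flush_tlb_others()`:

```c
#include <stdbool.h>

#define KVM_VCPU_PREEMPTED  (1u << 0)
#define KVM_VCPU_FLUSH_TLB  (1u << 1)

/* Simplified stand-in for the per-vCPU steal-time area shared between
 * guest and host (struct kvm_steal_time in the real code). */
struct vcpu_state {
	unsigned int preempted;   /* host sets KVM_VCPU_PREEMPTED */
	bool ipi_sent;            /* did the initiator send a flush IPI? */
};

/* Guest-side shootdown decision, modeled on kvm_flush_tlb_others():
 * a running vCPU gets the IPI; a preempted vCPU just gets
 * KVM_VCPU_FLUSH_TLB set, deferring the flush to the host. */
static void pv_flush_tlb_others(struct vcpu_state *cpus, int n)
{
	for (int i = 0; i < n; i++) {
		if (cpus[i].preempted & KVM_VCPU_PREEMPTED)
			cpus[i].preempted |= KVM_VCPU_FLUSH_TLB; /* deferred */
		else
			cpus[i].ipi_sent = true;                 /* immediate */
	}
}

/* Host-side resume path: honor the deferred request.  The real host
 * flush uses INVVPID, which invalidates all translations for the
 * vCPU's VPID (every PCID) -- the cost the proposal above tries to
 * avoid by letting the vCPU flush only what it needs. */
static bool resume_vcpu(struct vcpu_state *cpu)
{
	bool need_flush = cpu->preempted & KVM_VCPU_FLUSH_TLB;

	cpu->preempted = 0;
	return need_flush; /* true => flush TLB before entering guest */
}
```

The proposed change would replace the host-side INVVPID in `resume_vcpu()` with an injected interrupt so the guest performs a narrower flush itself, subject to the kernel-mode/interrupts-enabled check noted above.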

    Wanpeng