Message-ID: <07348bb2-c8a5-41d0-afca-26c1056570a5.bangcai.hrg@alibaba-inc.com>
Date:   Tue, 25 Feb 2020 12:12:11 +0800
From:   "何容光(邦采)" <bangcai.hrg@...baba-inc.com>
To:     "namit" <namit@...are.com>, "peterz" <peterz@...radead.org>,
        "kernellwp" <kernellwp@...il.com>, "pbonzini" <pbonzini@...hat.com>
Cc:     "dave.hansen" <dave.hansen@...el.com>, "mingo" <mingo@...hat.com>,
        "tglx" <tglx@...utronix.de>, "x86" <x86@...nel.org>,
        "linux-kernel" <linux-kernel@...r.kernel.org>,
        "dave.hansen" <dave.hansen@...ux.intel.com>, "bp" <bp@...en8.de>,
        "luto" <luto@...nel.org>, "kvm" <kvm@...r.kernel.org>,
        "yongting.lyt" <yongting.lyt@...baba-inc.com>,
        "吴启翾(启翾)" <qixuan.wqx@...baba-inc.com>
Subject: [RFC] Question about async TLB flush and KVM pv tlb improvements

Hi there,

I saw the async TLB flush patch at https://lore.kernel.org/patchwork/patch/1082481/ and, now that a year has passed, I am wondering whether you still consider the patch practical, or whether it has functional flaws.
From my point of view, Nadav's patch seems to have no obvious flaw. However, I am not familiar with the relationship between the CPU's speculative execution and stale TLB entries, since it is usually transparent to programmers. Under which conditions would a machine check occur? Is there a reference I could study?
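For context, here is a minimal sketch of the synchronous pattern that the async patch relaxes. The wrapper name is mine and it loosely follows native_flush_tlb_others() in arch/x86/mm/tlb.c, with the lazy-TLB handling omitted, so please read it as an illustration rather than the exact code:

	/*
	 * Simplified, illustrative sketch -- not the actual kernel code.
	 * With the last argument (wait) set to 1, the initiating CPU blocks
	 * until every remote CPU has executed flush_tlb_func_remote().
	 * The async patch returns before that wait and only synchronizes
	 * later, so a remote CPU may briefly keep using a stale translation;
	 * that window is the part I do not fully understand.
	 */
	static void sync_flush_tlb_others(const struct cpumask *cpumask,
					  const struct flush_tlb_info *info)
	{
		smp_call_function_many(cpumask, flush_tlb_func_remote,
				       (void *)info, 1);
	}
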
BTW, I am trying to improve the KVM PV TLB flush. When a vCPU is preempted, the initiating CPU does not send an IPI to it or wait for it; instead of the VMM flushing the TLB on behalf of the preempted vCPU, I want the VMM to inject an interrupt into the vCPU when it resumes, perhaps an NMI in case the vCPU is not in kernel mode or has interrupts disabled, and let the vCPU flush its own TLB; otherwise, stick with the VMM flush. The motivation is that a VMM flush using INVVPID flushes the TLB entries of all PCIDs and therefore hurts the performance of the preempted vCPU. Would this have the same problem as the async TLB flush patch?
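For reference, this is roughly the guest-side logic I mean, a simplified sketch following kvm_flush_tlb_others() in arch/x86/kernel/kvm.c (the real code uses a per-CPU mask and extra checks that I have omitted, so treat it as an illustration, not the exact code):

	/*
	 * Simplified sketch of the current PV flush path on the guest side.
	 * A preempted vCPU is dropped from the IPI mask; instead we set
	 * KVM_VCPU_FLUSH_TLB in its steal-time record and the host flushes
	 * for it (via INVVPID, dropping all PCIDs) before it runs again.
	 */
	static void pv_flush_tlb_others_sketch(const struct cpumask *cpumask,
					       const struct flush_tlb_info *info)
	{
		struct cpumask flushmask;	/* real code uses a per-CPU mask */
		int cpu;

		cpumask_copy(&flushmask, cpumask);

		for_each_cpu(cpu, &flushmask) {
			struct kvm_steal_time *src = &per_cpu(steal_time, cpu);
			u8 state = READ_ONCE(src->preempted);

			if ((state & KVM_VCPU_PREEMPTED) &&
			    cmpxchg(&src->preempted, state,
				    state | KVM_VCPU_FLUSH_TLB) == state)
				/* Host flushes this vCPU before it resumes. */
				__cpumask_clear_cpu(cpu, &flushmask);
		}

		/* IPI and wait only for the vCPUs that are still running. */
		native_flush_tlb_others(&flushmask, info);
	}

My idea is to replace the host-side INVVPID taken for the preempted vCPUs in this path with an interrupt injected into the resuming vCPU, so it can do a targeted flush itself.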
Thanks in advance.
