Message-ID: <660daad7-afb0-496d-9f40-a1162d5451e2.bangcai.hrg@alibaba-inc.com>
Date: Tue, 25 Feb 2020 15:53:17 +0800
From: "何容光(邦采)" <bangcai.hrg@...baba-inc.com>
To: "Wanpeng Li" <kernellwp@...il.com>
Cc: "namit" <namit@...are.com>, "peterz" <peterz@...radead.org>,
"pbonzini" <pbonzini@...hat.com>,
"dave.hansen" <dave.hansen@...el.com>, "mingo" <mingo@...hat.com>,
"tglx" <tglx@...utronix.de>, "x86" <x86@...nel.org>,
"linux-kernel" <linux-kernel@...r.kernel.org>,
"dave.hansen" <dave.hansen@...ux.intel.com>, "bp" <bp@...en8.de>,
"luto" <luto@...nel.org>, "kvm" <kvm@...r.kernel.org>,
"林永听(海枫)" <yongting.lyt@...baba-inc.com>,
"吴启翾(启翾)" <qixuan.wqx@...baba-inc.com>,
"herongguang" <herongguang@...ux.alibaba.com>
Subject: Re: [RFC] Question about async TLB flush and KVM pv tlb improvements
> On Tue, 25 Feb 2020 at 12:12, 何容光(邦采) <bangcai.hrg@...baba-inc.com> wrote:
>>
>> Hi there,
>>
>> I saw this async TLB flush patch at https://lore.kernel.org/patchwork/patch/1082481/ , and I am wondering, a year later, whether you think this patch is practical or whether it has functional flaws?
>> From my POV, Nadav's patch seems to have no obvious flaw. But I am not familiar with the relationship between the CPU's speculative execution and stale TLB entries, since it is usually transparent from a programming point of view. Under which conditions would a machine check occur? Is there some reference I can learn from?
>> BTW, I am trying to improve KVM PV TLB flush: when a vCPU is preempted, the initiating CPU does not send an IPI to the preempted vCPU and wait for it; instead, when the preempted vCPU resumes, I want the VMM to inject an interrupt, perhaps an NMI, into the vCPU and let the vCPU flush its own TLB rather than the VMM flushing the TLB on its behalf. In case the vCPU is not in kernel mode or has interrupts disabled, stick to the VMM flush, since a VMM flush using INVVPID flushes all TLB entries of all PCIDs and thus has some negative performance impact on the preempted vCPU. So is there the same problem as with the async TLB flush patch?
> PV TLB shootdown is disabled in the dedicated scenario. I believe there
> are already heavy TLB misses in overcommit scenarios even before this
> feature, so flushing all TLB entries associated with one specific VPID
> will not make things that much worse.
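For context, the guest-side path we are comparing against is roughly the
following, as I read arch/x86/kernel/kvm.c (a simplified sketch; the exact
code in a given tree may differ):

static void kvm_flush_tlb_others(const struct cpumask *cpumask,
				 const struct flush_tlb_info *info)
{
	struct cpumask *flushmask = this_cpu_cpumask_var_ptr(__pv_tlb_mask);
	struct kvm_steal_time *src;
	u8 state;
	int cpu;

	cpumask_copy(flushmask, cpumask);
	for_each_cpu(cpu, flushmask) {
		src = &per_cpu(steal_time, cpu);
		state = READ_ONCE(src->preempted);
		if ((state & KVM_VCPU_PREEMPTED) &&
		    try_cmpxchg(&src->preempted, &state,
				state | KVM_VCPU_FLUSH_TLB))
			/* preempted vCPU: leave the flush to the VMM, skip the IPI */
			__cpumask_clear_cpu(cpu, flushmask);
	}
	/* IPI (and wait) only for the vCPUs that are actually running */
	native_flush_tlb_others(flushmask, info);
}

On the host side, as I understand it, when the KVM_VCPU_FLUSH_TLB hint is
seen while the preempted vCPU is being scheduled back in, the VMM flushes the
whole guest TLB, which is the INVVPID cost I mentioned above.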
If the number of vCPUs running on one pCPU is limited to a few, from my
test there can still be some benefit, especially if we can move all the
logic to the VMM and eliminate the waiting for IPIs; however, functional
correctness is a concern. This is also why I looked into Nadav's patch.
Do you have any advice on this?
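To make the idea more concrete, roughly what I have in mind on the host side
is the sketch below. Everything here is hypothetical (the helper names do not
exist in KVM today); it is only meant to show the decision I described above:

/*
 * Sketch: on sched-in of a vCPU that has a pending PV flush request,
 * decide whether the guest can be asked to flush for itself.
 */
static void pv_flush_on_vcpu_resume(struct kvm_vcpu *vcpu)
{
	if (!vcpu_has_pending_pv_flush(vcpu))	/* hypothetical helper */
		return;

	/*
	 * Only defer to the guest when it can take the injected vector
	 * right away: kernel mode (CPL 0) with interrupts enabled.
	 * Otherwise fall back to the existing VMM-side flush, i.e. the
	 * INVVPID covering all PCIDs of this vCPU.
	 */
	if (vcpu_guest_cpl(vcpu) == 0 && vcpu_guest_irqs_on(vcpu))	/* hypothetical */
		inject_pv_flush_vector(vcpu);	/* guest handler does a local flush */
	else
		vmm_flush_guest_tlb(vcpu);	/* today's INVVPID path */
}

The part I am still unsure about is whether a flush deferred into the
injected handler can hit the same stale-TLB-vs-speculation issues as the
async flush patch.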