Date: Mon, 14 Dec 2015 10:58:00 -0500
From: Boris Ostrovsky <boris.ostrovsky@...cle.com>
To: Roger Pau Monné <roger.pau@...rix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Cc: 3.14+@...r.us.oracle.com, linux-kernel@...r.kernel.org,
	stable@...r.kernel.org, david.vrabel@...rix.com, jbeulich@...e.com,
	xen-devel@...ts.xenproject.org, #@...r.us.oracle.com
Subject: Re: [Xen-devel] [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op

On 12/14/2015 10:35 AM, Roger Pau Monné wrote:
> On 14/12/15 at 16.27, Konrad Rzeszutek Wilk wrote:
>> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>>> will likely perform the same IPIs as the guest would have.
>>>
>> But if the VCPU is asleep, doing it via the hypervisor will save us
>> waking up the guest VCPU and sending an IPI just to do a TLB flush
>> of that CPU. Which is pointless, as the CPU hadn't been running the
>> guest in the first place.

OK, I misread the hypervisor code then: I didn't realize that
vcpumask_to_pcpumask() takes vcpu_dirty_cpumask into account.

>>
>>> More importantly, using MMUEXT_INVLPG_MULTI may not invalidate the
>>> guest's address on a remote CPU (when, for example, a VCPU from
>>> another guest is running there).
>> Right, so the hypervisor won't even send an IPI there.
>>
>> But if you do it via the normal guest IPI mechanism (which is opaque
>> to the hypervisor) you end up scheduling the guest VCPU just to
>> send a hypervisor callback. And the callback will go to the IPI
>> routine, which will do a TLB flush. Not necessary.
>>
>> This is all in the case of oversubscription, of course. In the case
>> where we are fine on vCPU resources it does not matter.
>>
>> Perhaps if we had a PV-aware TLB flush it could do this differently?
> Why don't HVM/PVH just use the HVMOP_flush_tlbs hypercall?

It doesn't take any parameters, so it will invalidate TLBs for all
VCPUs, which is more than is being asked for, especially in the case
of MMUEXT_INVLPG_MULTI. (That's in addition to the fact that it
currently doesn't work for PVH, as it has a test for is_hvm_domain()
instead of has_hvm_container_domain().)

-boris
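
For reference, a minimal sketch of the kind of change the patch subject
describes, assuming the xen_init_mmu_ops() setup in arch/x86/xen/mmu.c of
that era; the surrounding context lines are illustrative, not the literal
patch. The idea: for auto-translated (PVH) guests, skip installing the PV
MMUEXT-based flush_tlb_others and keep the native IPI-based implementation
that HVM guests use.

	/*
	 * Sketch only: return before installing xen_flush_tlb_others()
	 * on PVH, so the kernel keeps the native (IPI-based)
	 * flush_tlb_others, the same path HVM guests take.
	 */
	void __init xen_init_mmu_ops(void)
	{
		x86_init.paging.pagetable_init = xen_pagetable_init;

		if (xen_feature(XENFEAT_auto_translated_physmap))
			return;	/* PVH: keep native flush_tlb_others */

		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
		/* ... remaining PV-only mmu_ops setup elided ... */
	}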
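
On the hypervisor side, the is_hvm_domain() test Boris mentions sits at
the top of the HVMOP_flush_tlbs handler. A sketch of the fix he alludes
to, assuming the Xen 4.6-era hvmop_flush_tlb_all() in
xen/arch/x86/hvm/hvm.c, with the actual flush logic elided:

	static int hvmop_flush_tlb_all(void)
	{
	    struct domain *d = current->domain;

	    /*
	     * PVH domains have an HVM container but are not full HVM
	     * domains, so the original is_hvm_domain(d) check made the
	     * hypercall fail for them. Widening the test lets PVH in.
	     */
	    if ( !has_hvm_container_domain(d) )
	        return -EINVAL;

	    /* ... pause VCPUs, flush all TLBs, unpause (unchanged) ... */
	    return 0;
	}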