Message-ID: <20190514072941.GG2589@hirez.programming.kicks-ass.net>
Date: Tue, 14 May 2019 09:29:41 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Liran Alon <liran.alon@...cle.com>
Cc: Andy Lutomirski <luto@...nel.org>,
Alexandre Chartre <alexandre.chartre@...cle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Radim Krcmar <rkrcmar@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
kvm list <kvm@...r.kernel.org>, X86 ML <x86@...nel.org>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
jan.setjeeilers@...cle.com, Jonathan Adams <jwadams@...gle.com>
Subject: Re: [RFC KVM 00/27] KVM Address Space Isolation

(please, wrap our emails at 78 chars)

On Tue, May 14, 2019 at 12:08:23AM +0300, Liran Alon wrote:
> 3) From (2), we should have theoretically deduced that for every
> #VMExit, there is a need to kick the sibling hyperthread also outside
> of guest until the #VMExit is completed.

That's not in fact quite true; all you have to do is send the IPI.
Having one sibling IPI the other sibling carries enough guarantees that
the receiving sibling will not execute any further guest instructions.

That is, you don't have to wait on the VMExit to complete; you can just
IPI and get on with things. Now, this is still expensive, but it is
heaps better than doing a full sync-up between siblings.
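
Very roughly, and purely as an illustration (kvm_kick_sibling() is a
made-up name and the wiring into the exit path is not shown;
topology_sibling_cpumask() and smp_send_reschedule() are the existing
kernel helpers), the fire-and-forget version would look something like:

  #include <linux/smp.h>
  #include <linux/cpumask.h>
  #include <linux/topology.h>

  /* Illustrative sketch only; not taken from the posted series. */
  static void kvm_kick_sibling(void)
  {
          int cpu = smp_processor_id();
          int sibling;

          for_each_cpu(sibling, topology_sibling_cpumask(cpu)) {
                  if (sibling == cpu)
                          continue;
                  /*
                   * Fire-and-forget: the IPI itself forces the sibling
                   * to VMExit (external interrupts exit to the host in
                   * VMX non-root mode), so there is no need to wait for
                   * the sibling to acknowledge anything.
                   */
                  smp_send_reschedule(sibling);
          }
  }

The sender can then get on with handling its own exit immediately; the
only synchronisation you get (and need) is the delivery of the
interrupt itself.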