Message-ID: <8c497050-e425-49ea-b07c-b86575b9c63e@default>
Date: Tue, 9 Jan 2018 08:01:13 -0800 (PST)
From: Liran Alon <liran.alon@...cle.com>
To: <arjan@...ux.intel.com>
Cc: <jmattson@...gle.com>, <dwmw@...zon.co.uk>, <bp@...en8.de>,
<aliguori@...zon.com>, <thomas.lendacky@....com>,
<pbonzini@...hat.com>, <linux-kernel@...r.kernel.org>,
<kvm@...r.kernel.org>
Subject: Re: [PATCH 6/7] x86/svm: Set IBPB when running a different VCPU
----- arjan@...ux.intel.com wrote:
> >> I'm sorry I'm not familiar with your L0/L1/L2 terminology
> >> (maybe it's before coffee has had time to permeate the brain)
> >
> > These are the standard terms for guest levels:
> > L0 == hypervisor that runs on bare metal.
> > L1 == hypervisor that runs as a guest of L0.
> > L2 == software that runs as a guest of L1.
> > (We are talking about nested virtualization here.)
>
> 1. I really really hope that the guests don't use IBRS but use
> retpoline. At least for Linux that is going to be the preferred
> approach.
>
> 2. For the CPU, there really is only "bare metal" vs "guest"; all
> guests are "guests" no matter how deeply nested. So for the language
> of privilege domains etc., nested guests equal their parent.
So in the scenario I mentioned above, would L1 use BTB/BHB entries inserted by L2?
To me it seems that it would, if IBRS only takes the prediction mode into
consideration, since L1 and L2 both run in the same (guest) prediction mode.
Therefore, we must issue an IBPB when switching between L1 & L2,
the same as happens on nVMX when switching between vmcs01 & vmcs02.
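
For reference, a rough sketch of the barrier itself (the helper name and the
call-site condition below are hypothetical, not actual KVM code; the MSR
number and bit are the architectural IA32_PRED_CMD/IBPB definitions):

    #include <stdint.h>

    #define MSR_IA32_PRED_CMD  0x00000049   /* IA32_PRED_CMD command MSR */
    #define PRED_CMD_IBPB      (1ULL << 0)  /* bit 0: issue an IBPB */

    /* Hypothetical helper: discard indirect branch predictions learned by
     * the previously running guest level before entering the other one. */
    static inline void nested_switch_ibpb(void)
    {
            uint32_t lo = (uint32_t)PRED_CMD_IBPB;
            uint32_t hi = (uint32_t)(PRED_CMD_IBPB >> 32);

            /* PRED_CMD is a write-only command MSR; writing bit 0 flushes
             * the indirect branch predictors. */
            asm volatile("wrmsr" : : "c"(MSR_IA32_PRED_CMD), "a"(lo), "d"(hi));
    }

    /* Hypothetical call site, on the vmcs01 <-> vmcs02 (or nested VMCB)
     * switch path:
     *
     *     if (switching between L1 and L2)
     *             nested_switch_ibpb();
     */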
-Liran