Message-ID: <d7c9f133-97a8-db07-0c56-36b8ae2fba3a@linux.intel.com>
Date: Tue, 9 Jan 2018 07:56:52 -0800
From: Arjan van de Ven <arjan@...ux.intel.com>
To: Liran Alon <liran.alon@...cle.com>
Cc: jmattson@...gle.com, dwmw@...zon.co.uk, bp@...en8.de,
thomas.lendacky@....com, aliguori@...zon.com, pbonzini@...hat.com,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH 6/7] x86/svm: Set IBPB when running a different VCPU
>> I'm sorry I'm not familiar with your L0/L1/L2 terminology
>> (maybe it's before coffee has had time to permeate the brain)
>
> This is standard terminology for guest levels:
> L0 == the hypervisor that runs on bare metal.
> L1 == a hypervisor that runs as an L0 guest.
> L2 == software that runs as an L1 guest.
> (We are talking about nested virtualization here.)
1. I really, really hope that the guests don't use IBRS but use retpoline. At least for Linux, that is going to be the preferred approach.
2. For the CPU, there really is only "bare metal" vs "guest"; all guests are "guests" no matter how deeply nested. So in terms of privilege domains etc.,
nested guests are equivalent to their parent.
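
For readers unfamiliar with the retpoline mentioned in 1.: it replaces an indirect branch with a construct that traps the CPU's speculation in a harmless loop instead of letting the branch predictor pick a (possibly attacker-trained) target. A sketch of the standard thunk, in GAS syntax, for an indirect jump whose target is in %rax (label names are illustrative; compilers emit their own):

```
__x86_indirect_thunk_rax:
	call	.Lset_target		/* push address of .Lspec_trap, jump below */
.Lspec_trap:
	pause				/* speculation lands here via the RSB ... */
	lfence				/* ... and spins harmlessly until resolved */
	jmp	.Lspec_trap
.Lset_target:
	mov	%rax, (%rsp)		/* overwrite the return address with the */
	ret				/* real target; architectural ret goes there */
```

The return predictor speculates the final ret into the pause/lfence trap, while the architectural execution returns to the real target in %rax, so no indirect-branch prediction (and hence no IBRS MSR write) is needed on that path.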