Message-ID: <20220222111110.qe3bjqq6huomqqmi@black.fi.intel.com>
Date: Tue, 22 Feb 2022 14:11:10 +0300
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
To: Dingji Li <dj_lee@...u.edu.cn>
Cc: tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...el.com, luto@...nel.org, peterz@...radead.org,
sathyanarayanan.kuppuswamy@...ux.intel.com, aarcange@...hat.com,
ak@...ux.intel.com, dan.j.williams@...el.com, david@...hat.com,
hpa@...or.com, jgross@...e.com, jmattson@...gle.com,
joro@...tes.org, jpoimboe@...hat.com, knsathya@...nel.org,
pbonzini@...hat.com, sdeep@...are.com, seanjc@...gle.com,
tony.luck@...el.com, vkuznets@...hat.com, wanpengli@...cent.com,
x86@...nel.org, linux-kernel@...r.kernel.org,
Sean Christopherson <sean.j.christopherson@...el.com>
Subject: Re: [PATCHv3 08/32] x86/traps: Add #VE support for TDX guest
On Tue, Feb 22, 2022 at 03:19:47PM +0800, Dingji Li wrote:
> Hi all,
>
> I hope it is appropriate to ask these questions here:
>
> I'm wondering if there are any performance comparisons available between
> TDX guests and VMX guests. The #VE processing adds non-trivial overhead
> to various VM exits, but how does it affect the performance of
> real-world applications? Existing patches have listed alternative
> methods to avoid the #VE in the first place, but there are trade-offs
> (e.g., bloated code, reduced generality). Also, how much does the time
> spent in the TDX module affect VM exits and applications? (I guess the
> TDX module adds little overhead compared to #VE processing, but there
> is no public data.) Maybe some performance data could help make better
> trade-offs?
This is the basic enabling of TDX guest support. The goal is to make a
TDX guest functional. Yes, #VE handling adds non-trivial overhead, and
we have a plan to mitigate it: there are patches in the queue that avoid
the bulk of #VEs, such as replacing #VE-based MMIO with direct
hypercalls. TDX will still carry a performance penalty over plain VMX
no matter what, but we aim to minimize it.
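To illustrate the direction (a rough sketch only; tdvmcall_mmio_read()
is a placeholder name for the TDVMCALL wrapper, not an API from this
series):

    #include <linux/io.h>
    #include <linux/types.h>

    /* Placeholder for the real TDVMCALL wrapper; not from this series. */
    long tdvmcall_mmio_read(u64 addr, int size, u64 *val);

    /* Direct-hypercall path: no #VE, no in-guest instruction decoding. */
    static u32 tdx_mmio_read32(void __iomem *addr)
    {
            u64 val = 0;

            tdvmcall_mmio_read((u64)(unsigned long)addr, 4, &val);
            return (u32)val;
    }

    /*
     * #VE path for comparison: the plain load inside readl() faults,
     * the #VE handler fetches the VE info from the TDX module, decodes
     * the access, and ends up issuing the same hypercall on the
     * guest's behalf.
     */
    static u32 ve_mmio_read32(void __iomem *addr)
    {
            return readl(addr);
    }

The direct call skips the exception delivery and the instruction
decoding, which is the overhead the queued patches are meant to remove.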
I don't have any performance numbers to share at the moment.
--
Kirill A. Shutemov