Message-ID: <ZN4yMMdB3oGnliqa@google.com>
Date: Thu, 17 Aug 2023 07:44:00 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Binbin Wu <binbin.wu@...ux.intel.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
pbonzini@...hat.com, chao.gao@...el.com, kai.huang@...el.com,
David.Laight@...lab.com, robert.hu@...ux.intel.com,
guang.zeng@...el.com
Subject: Re: [PATCH v10 7/9] KVM: VMX: Implement and wire get_untagged_addr()
for LAM
On Thu, Aug 17, 2023, Binbin Wu wrote:
>
>
> On 8/17/2023 6:01 AM, Sean Christopherson wrote:
> > On Wed, Jul 19, 2023, Binbin Wu wrote:
> > > + return (sign_extend64(gva, lam_bit) & ~BIT_ULL(63)) | (gva & BIT_ULL(63));
> > Almost forgot. Please add a comment explaining how LAM untags the address,
> > specifically the whole bit 63 preservation. The logic is actually straightforward,
> > but the above looks way more complex than it actually is. This?
> >
> > /*
> > * Untag the address by sign-extending the LAM bit, but NOT to bit 63.
> > * Bit 63 is retained from the raw virtual address so that untagging
> > * doesn't change a user access to a supervisor access, and vice versa.
> > */
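For reference, a minimal sketch of how that comment reads above the masking
expression. The helper name, the standalone lam_bit parameter, and the includes
are illustrative only; the real VMX helper derives lam_bit (47 or 56) from the
active LAM mode before doing the masking.

#include <linux/bitops.h>	/* sign_extend64() */
#include <linux/bits.h>		/* BIT_ULL() */
#include <linux/types.h>	/* u64 */

/* Illustrative only: lam_bit is 47 for LAM_{S,U}48 and 56 for LAM_{S,U}57. */
static u64 lam_untag_sketch(u64 gva, int lam_bit)
{
	/*
	 * Untag the address by sign-extending the LAM bit, but NOT to bit 63.
	 * Bit 63 is retained from the raw virtual address so that untagging
	 * doesn't change a user access to a supervisor access, and vice versa.
	 */
	return (sign_extend64(gva, lam_bit) & ~BIT_ULL(63)) | (gva & BIT_ULL(63));
}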
> OK.
>
> Besides that, I noticed I forgot to add the comment for the function. I will
> add it back if you don't object.
>
> +/*
> + * Only called in 64-bit mode.
This is no longer true.
> + *
> + * LAM has a modified canonical check when applicable:
> + * LAM_S48                : [ 1 ][ metadata ][ 1 ]
> + *                            63               47
> + * LAM_U48                : [ 0 ][ metadata ][ 0 ]
> + *                            63               47
> + * LAM_S57                : [ 1 ][ metadata ][ 1 ]
> + *                            63               56
> + * LAM_U57 + 5-lvl paging : [ 0 ][ metadata ][ 0 ]
> + *                            63               56
> + * LAM_U57 + 4-lvl paging : [ 0 ][ metadata ][ 0...0 ]
> + *                            63               56..47
I vote to not include the table, IMO it does more harm than good, e.g. I only
understood what the last U57+4-lvl entry is conveying after reading the same
figure in the ISE. Again, the concept of masking bits 62:{56,47} is quite
straightforward, and that's what this function handles. The gory details of
userspace not
> + * Note that KVM masks the metadata in addresses, performs the (original)
> + * canonicality checking and then walks the page tables. This is slightly
> + * different from hardware behavior but achieves the same effect.
> + * Specifically, if LAM is enabled, the processor performs a modified
> + * canonicality check in which the metadata bits are ignored instead of
> + * masked. After the modified canonicality check, the processor masks the
> + * metadata before passing the address on for paging translation.
Please drop this. I don't think we can extrapolate exact hardware behavior from
the ISE blurbs that say the masking is applied after the modified canonicality
check. Hardware/ucode could very well take the exact same approach as KVM; all
that matters is that the behavior is architecturally correct.

If we're concerned about the blurbs saying the masking is performed *after* the
canonicality checks, e.g. this

  After this modified canonicality check is performed, bits 62:48 are masked by
  sign-extending the value of bit 47 (1)

then the comment should focus on whether or not KVM adheres to the architecture
(SDM), e.g.
/*
* Note, the SDM states that the linear address is masked *after* the modified
* canonicality check, whereas KVM masks (untags) the address and then performs
* a "normal" canonicality check. Functionally, the two methods are identical,
* and when the masking occurs relative to the canonicality check isn't visible
* to software, i.e. KVM's behavior doesn't violate the SDM.
*/
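Purely for illustration, a small, hypothetical userspace harness (not KVM code;
all names are made up, and it hardcodes a LAM_S48-style layout with metadata in
bits 62:48 and a 48-bit virtual address) that checks the two orderings always
agree:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BIT_ULL(n)	(1ULL << (n))

/* Userspace stand-in for the kernel's sign_extend64(). */
static uint64_t sign_extend64(uint64_t value, int index)
{
	int shift = 63 - index;

	return (uint64_t)((int64_t)(value << shift) >> shift);
}

/* "Normal" canonicality check for a 48-bit (4-level) virtual address. */
static bool is_canonical_48(uint64_t addr)
{
	return sign_extend64(addr, 47) == addr;
}

/* KVM's order: untag (mask metadata, keep bit 63), then normal check. */
static bool kvm_order(uint64_t gva, uint64_t *la)
{
	*la = (sign_extend64(gva, 47) & ~BIT_ULL(63)) | (gva & BIT_ULL(63));
	return is_canonical_48(*la);
}

/* SDM order: modified check that ignores bits 62:48, then mask them. */
static bool sdm_order(uint64_t gva, uint64_t *la)
{
	bool canonical = !!(gva & BIT_ULL(63)) == !!(gva & BIT_ULL(47));

	*la = (sign_extend64(gva, 47) & ~BIT_ULL(63)) | (gva & BIT_ULL(63));
	return canonical;
}

int main(void)
{
	for (int i = 0; i < 1000000; i++) {
		uint64_t gva = ((uint64_t)rand() << 62) ^
			       ((uint64_t)rand() << 31) ^ (uint64_t)rand();
		uint64_t la_kvm, la_sdm;
		bool ok_kvm = kvm_order(gva, &la_kvm);
		bool ok_sdm = sdm_order(gva, &la_sdm);

		assert(ok_kvm == ok_sdm);
		if (ok_kvm)
			assert(la_kvm == la_sdm);
	}
	printf("masking before vs. after the canonicality check is not observable\n");
	return 0;
}

The reason the assertions can't fire: once the metadata bits are sign-extended
from the LAM bit, the "normal" canonicality check collapses to comparing bit 63
against bit 47, which is exactly the modified check, and both paths compute the
same final linear address.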