Message-ID: <Ys2wDjRAIVhXZjOh@google.com>
Date:   Tue, 12 Jul 2022 17:31:58 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Maxim Levitsky <mlevitsk@...hat.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        syzbot+760a73552f47a8cd0fd9@...kaller.appspotmail.com,
        Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
        Hou Wenlong <houwenlong.hwl@...group.com>
Subject: Re: [PATCH 2/3] KVM: x86: Set error code to segment selector on
 LLDT/LTR non-canonical #GP

On Tue, Jul 12, 2022, Maxim Levitsky wrote:
> On Mon, 2022-07-11 at 23:27 +0000, Sean Christopherson wrote:
> > When injecting a #GP on LLDT/LTR due to a non-canonical LDT/TSS base, set
> > the error code to the selector.  Intel's SDM says nothing about the #GP,
> > but AMD's APM explicitly states that both LLDT and LTR set the error code
> > to the selector, not zero.
> > 
> > Note, a non-canonical memory operand on LLDT/LTR does generate a #GP(0),
> > but the KVM code in question is specific to the base from the descriptor.
> > 
> > Fixes: e37a75a13cda ("KVM: x86: Emulator ignores LDTR/TR extended base on LLDT/LTR")
> > Cc: stable@...r.kernel.org
> > Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> > ---
> >  arch/x86/kvm/emulate.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> > index 09e4b67b881f..bd9e9c5627d0 100644
> > --- a/arch/x86/kvm/emulate.c
> > +++ b/arch/x86/kvm/emulate.c
> > @@ -1736,8 +1736,8 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
> >                 if (ret != X86EMUL_CONTINUE)
> >                         return ret;
> >                 if (emul_is_noncanonical_address(get_desc_base(&seg_desc) |
> > -                               ((u64)base3 << 32), ctxt))
> > -                       return emulate_gp(ctxt, 0);
> > +                                                ((u64)base3 << 32), ctxt))
> > +                       return emulate_gp(ctxt, err_code);
> >         }
> >  
> >         if (seg == VCPU_SREG_TR) {
> 
> I guess this is the quote from AMD's manual (might be worth adding to the source?)

Eh, probably not worth it.  Anyone working on segmentation emulation is already
up to their eyeballs in the SDM/APM.
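
For context, the patch above keeps the same non-canonical check on the 64-bit
descriptor base and only changes the second argument of emulate_gp() from 0 to
err_code, i.e. the selector being loaded.  Below is a minimal sketch (my own
illustration, not KVM's implementation) of the canonical-address rule for
48-bit virtual addresses; KVM's emul_is_noncanonical_address() derives the
actual virtual-address width from guest state rather than hard-coding 48 bits.

#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch only: a 48-bit virtual address is canonical when bits 63:47
 * are all copies of bit 47.  Shift the low 48 bits up (unsigned), then
 * arithmetic-shift back down to sign-extend from bit 47 and compare.
 */
static bool is_noncanonical_48(uint64_t addr)
{
	uint64_t sign_extended = (uint64_t)((int64_t)(addr << 16) >> 16);

	return sign_extended != addr;
}

/*
 * Example: 0x0000800000000000 is non-canonical (bit 47 set, bits 63:48
 * clear), while 0xffff800000000000 is canonical.
 */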
