Date:   Wed, 6 May 2020 17:13:56 -0400
From:   Peter Xu <peterx@...hat.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        Sean Christopherson <sean.j.christopherson@...el.com>
Subject: Re: [PATCH 8/9] KVM: x86, SVM: do not clobber guest DR6 on
 KVM_EXIT_DEBUG

On Wed, May 06, 2020 at 10:07:15PM +0200, Paolo Bonzini wrote:
> On 06/05/20 20:15, Peter Xu wrote:
> > On Wed, May 06, 2020 at 07:10:33AM -0400, Paolo Bonzini wrote:
> >> On Intel, #DB exceptions transmit the DR6 value via the exit qualification
> >> field of the VMCS, and the exit qualification only contains the description
> >> of the precise event that caused a vmexit.
> >>
> >> On AMD, instead the DR6 field of the VMCB is filled in as if the #DB exception
> >> was to be injected into the guest.  This has two effects when guest debugging
> >> is in use:
> >>
> >> * the guest DR6 is clobbered
> >>
> >> * the kvm_run->debug.arch.dr6 field can accumulate more debug events, rather
> >> than just the last one that happened.
> >>
> >> Fortunately, if guest debugging is in use, debug register reads and writes
> >> are always intercepted.  Now that the guest DR6 is always synchronized with
> >> vcpu->arch.dr6, we can just run the guest with an all-zero DR6 while guest
> >> debugging is enabled, and restore the guest value when it is disabled.  This
> >> fixes both problems.
> >>
> >> A testcase for the second issue is added in the next patch.
> > 
> > Is there supposed to be another test after this one, or the GD test?
> 
> It's the GD test.

Oh... so will dr6 have some leftover bit set in the GD test on AMD without
this patch?  Btw, I noticed a small difference between the Intel and AMD specs
for this case, e.g., the B[0-3] definitions regarding such leftover bits...

Intel says:

        B0 through B3 (breakpoint condition detected) flags (bits 0 through 3)
        — Indicates (when set) that its associated breakpoint condition was met
        when a debug exception was generated. These flags are set if the
        condition described for each breakpoint by the LENn, and R/Wn flags in
        debug control register DR7 is true. They may or may not be set if the
        breakpoint is not enabled by the Ln or the Gn flags in register
        DR7. Therefore on a #DB, a debug handler should check only those B0-B3
        bits which correspond to an enabled breakpoint.

AMD says:

        Breakpoint-Condition Detected (B3–B0)—Bits 3:0. The processor updates
        these four bits on every debug breakpoint or general-detect
        condition. A bit is set to 1 if the corresponding address-breakpoint
        register detects an enabled breakpoint condition, as specified by the
        DR7 Ln, Gn, R/Wn and LENn controls, and is cleared to 0 otherwise. For
        example, B1 (bit 1) is set to 1 if an address-breakpoint condition is
        detected by DR1.

I'm not sure whether that means the AMD B[0-3] bits are more strict than the
Intel ones (if so, then the selftest could be a bit too strict for VMX).
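
For reference, Intel's advice above boils down to masking the B0-B3 bits in
DR6 by the DR7 enables before trusting them.  A toy sketch of such a mask
(hypothetical helper name, plain userspace C, not code from this patch):

```c
#include <stdint.h>

/*
 * Per the quoted Intel SDM text, a #DB handler should only check the
 * B0-B3 bits of DR6 whose breakpoints are enabled via the Ln/Gn bits
 * of DR7.  Bn is bit n of DR6; Ln is bit 2n and Gn bit 2n+1 of DR7.
 */
static uint64_t dr6_enabled_b_bits(uint64_t dr6, uint64_t dr7)
{
	uint64_t enabled = 0;
	int n;

	for (n = 0; n < 4; n++) {
		if (dr7 & (3ULL << (2 * n)))	/* Ln or Gn set? */
			enabled |= 1ULL << n;
	}
	return dr6 & enabled;
}
```

On AMD, if the APM wording is taken literally, the hardware itself already
clears Bn for disabled breakpoints, so this masking would be a no-op there.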

> >> +		/* This restores DR6 to all zeros.  */
> >> +		kvm_update_dr6(vcpu);
> > 
> > I feel like it won't work as expected for KVM_GUESTDBG_SINGLESTEP, because at
> > [2] below it'll go to the "else" instead, so it seems dr6 won't be cleared in
> > that case.
> 
> You're right, I need to cover both cases that trigger #DB.
> 
> > Another concern I have is that, I mostly read kvm_update_dr6() as "apply the
> > dr6 memory cache --> VMCB".  I'm worried this might confuse people (at least I
> > spent quite a few minutes to digest...) here because the latest data should
> > already be in the VMCB.
> 
> No, the latest guest register is always in vcpu->arch.dr6.  It's only
> because of KVM_DEBUGREG_WONT_EXIT that kvm_update_dr6() needs to pass
> vcpu->arch.dr6 to kvm_x86_ops.set_dr6.  Actually this patch could even
> check KVM_DEBUGREG_WONT_EXIT instead of vcpu->guest_debug.  I'll take a
> look tomorrow.

OK.
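
IIUC, the behavior the commit message describes boils down to something like
this (a toy sketch with invented names, not the actual KVM code):

```c
#include <stdint.h>

/*
 * Toy model of the DR6 handling described in the commit message: while
 * userspace guest debugging is active the vCPU runs with an all-zero
 * hardware DR6, so AMD's in-VMCB #DB reporting cannot clobber the
 * guest-visible value; the saved guest DR6 is written back once
 * debugging is disabled.
 */
struct toy_vcpu {
	int guest_debug;	/* nonzero while KVM_SET_GUEST_DEBUG is active */
	uint64_t arch_dr6;	/* analogue of vcpu->arch.dr6 */
	uint64_t hw_dr6;	/* value loaded into the real DR6 for the run */
};

static void toy_update_dr6(struct toy_vcpu *vcpu)
{
	if (vcpu->guest_debug)
		vcpu->hw_dr6 = 0;		/* host owns the debug regs */
	else
		vcpu->hw_dr6 = vcpu->arch_dr6;	/* restore guest value */
}
```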

> 
> > Also, IMHO it would be fine to have invalid dr6 values during
> > KVM_SET_GUEST_DEBUG.  I'm not sure whether my understanding is correct, but I
> > see KVM_SET_GUEST_DEBUG needs to override the in-guest debug completely.
> 
> Sort of, userspace can try to juggle host and guest debugging (this is
> why you have KVM_GUESTDBG_INJECT_DB and KVM_GUESTDBG_INJECT_BP).

I see!

> 
> > If we worry about dr6 being incorrect after KVM_SET_GUEST_DEBUG is disabled,
> > IMHO we can reset dr6 in kvm_arch_vcpu_ioctl_set_guest_debug() properly before
> > we return the debug registers to the guest.
> > 
> > PS. I cannot see the above lines [1] in my local tree (which seems to really
> > be a bugfix...).  I tried to use kvm/queue in case I had missed some patches,
> > but I still didn't see them.  So am I reading the wrong tree here?
> 
> The patch is based on kvm/master, and indeed that line is from a bugfix
> that I've posted yesterday ("KVM: SVM: fill in
> kvm_run->debug.arch.dr[67]"). I had pushed that one right away, because
> it was quite obviously suitable for 5.7.

Oh, that's why it looks so familiar (because I read that patch.. :).  It makes
sense now.  Thanks!

-- 
Peter Xu
