Message-ID: <Y8cLcY12zDWqO8nd@google.com>
Date:   Tue, 17 Jan 2023 20:56:17 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Zhi Wang <zhi.wang.linux@...il.com>
Cc:     isaku.yamahata@...el.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, isaku.yamahata@...il.com,
        Paolo Bonzini <pbonzini@...hat.com>, erdemaktas@...gle.com,
        Sagi Shahar <sagis@...gle.com>,
        David Matlack <dmatlack@...gle.com>,
        Sean Christopherson <sean.j.christopherson@...el.com>,
        Kai Huang <kai.huang@...el.com>
Subject: Re: [PATCH v11 018/113] KVM: TDX: create/destroy VM structure

On Tue, Jan 17, 2023, Zhi Wang wrote:
> On Tue, 17 Jan 2023 15:55:53 +0000
> Sean Christopherson <seanjc@...gle.com> wrote:
> 
> > On Sat, Jan 14, 2023, Zhi Wang wrote:
> > > On Fri, 13 Jan 2023 15:16:08 +0000
> > > Sean Christopherson <seanjc@...gle.com> wrote:
> > > 
> > > > On Fri, Jan 13, 2023, Zhi Wang wrote:
> > > > > Better add a FIXME: here as this has to be fixed later.
> > > > 
> > > > No, leaking the page is all KVM can reasonably do here.  An improved
> > > > comment would be helpful, but no code change is required.
> > > > tdx_reclaim_page() returns an error if and only if there's an
> > > > unexpected, fatal error, e.g. a SEAMCALL with bad params, incorrect
> > > > concurrency in KVM, a TDX Module bug, etc.  Retrying at a later point is
> > > > highly unlikely to be successful.
> > > 
> > > Hi:
> > > 
> > > The word "leaking" sounds like a situation left unhandled temporarily.
> > > 
> > > I checked the source code of the TDX module[1] for the possible reasons for
> > > failure when reviewing this patch:
> > > 
> > > tdx-module-v1.0.01.01.zip\src\vmm_dispatcher\api_calls\tdh_phymem_page_reclaim.c
> > > tdx-module-v1.0.01.01.zip\src\vmm_dispatcher\api_calls\tdh_phymem_page_wbinvd.c
> > > 
> > > a. Invalid parameters. For example, the page is not aligned, the PA HKID is not zero...
> > > 
> > > For invalid parameters, a WARN_ON_ONCE() + return value is good enough, as
> > > that is how the kernel handles similar situations. The caller takes the
> > > responsibility.
> > >  
> > > b. Locks have been taken in the TDX module. E.g. the TDR page has been locked
> > > due to another SEAMCALL, or another SEAMCALL is doing a PAMT walk and holding
> > > the PAMT lock...
> > > 
> > > This needs to be improved later, either by retrying or by taking tdx_lock, to
> > > avoid the TDX module failing on this.
> > 
> > No, tdx_reclaim_page() already retries TDH.PHYMEM.PAGE.RECLAIM if the target page
> > is contended (though I'd question the validity of even that), and TDH.PHYMEM.PAGE.WBINVD
> > is performed only when reclaiming the TDR.  If there's contention when reclaiming
> > the TDR, then KVM effectively has a use-after-free bug, i.e. leaking the page is
> > the least of our worries.
> > 
> 
> Hi:
> 
> Thanks for the reply. "Leaking" is the consequence of failing even after
> retries; I agree with this. But I was questioning whether "retry" is really
> the correct and only solution when encountering lock contention in the TDX
> module, as I saw that quite a few magic numbers are going to be introduced
> because of "retry", and there were discussions about whether the retry count
> should be 3 or 1000 in the TDX-guest-on-Hyper-V patches. It doesn't sound
> right.

Ah, yeah, I'm speaking only with respect to leaking pages on failure in this
specific scenario.

> Compared to a typical *kernel lock* case, an execution path can wait on a
> waitqueue and later be woken up. We usually do contention-wait-and-retry;
> we rarely just hit contention and retry X times. In the TDX case, I understand
> that it is hard for the TDX module to provide a similar solution, as an
> execution path can't stay long in the TDX module.

Let me preface the below comments by saying that this is the first time that I've
seen the "Single-Step and Zero-Step Attacks Mitigation Mechanisms" behavior, i.e.
the first time I've been made aware that the TDX Module can apparently decide
to take S-EPT locks in the VM-Enter path.

> 
> 1) We can always take tdx_lock (a linux kernel lock) when calling a SEAMCALL
> that touches the TDX internal locks. But the downside is that we might lose
> some concurrency.

This isn't really feasible in practice.  Even if tdx_lock were made a spinlock
(it's currently a mutex) so that it could be taken inside kvm->mmu_lock,
acquiring a per-VM lock, let alone a global lock, in KVM's page fault handling
path is not an option.  KVM has a hard requirement these days of being able to
handle multiple page faults in parallel.
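
To make the nesting problem concrete, here's a minimal sketch (not actual KVM
code; it assumes the TDP MMU's rwlock_t mmu_lock and the series' global
tdx_lock mutex):

#include <linux/kvm_host.h>
#include <linux/mutex.h>

/*
 * Illustrative only: why option 1 falls over.  The TDP MMU handles
 * faults under read_lock(&kvm->mmu_lock), a non-sleeping rwlock_t, so
 * a mutex may not be taken inside it, and even a spinlock here would
 * serialize the parallel fault handling the read lock exists to allow.
 */
static DEFINE_MUTEX(tdx_lock);	/* global, per the series */

static int fault_path_sketch(struct kvm *kvm)
{
	read_lock(&kvm->mmu_lock);	/* many vCPUs fault in parallel */

	mutex_lock(&tdx_lock);		/* BUG: may sleep under a rwlock */
	/* ... SEAMCALL ... */
	mutex_unlock(&tdx_lock);

	read_unlock(&kvm->mmu_lock);
	return 0;
}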

> 2) As the TDX module doesn't provide contention-and-wait, I guess the following
> approach might have been discussed when designing this "retry":
> 
> KERNEL                          TDX MODULE
> 
> SEAMCALL A   ->                 PATH A: Taking locks
> 
> SEAMCALL B   ->                 PATH B: Contention on a lock
> 
>              <-                 Return "operand busy"
> 
> SEAMCALL B   -|
>               |  <- Wait on a kernel waitqueue
> SEAMCALL B  <-|
> 
> SEAMCALL A   <-                 PATH A: Return
> 
> SEAMCALL A   -|
>               |  <- Wake up the waitqueue
> SEAMCALL A  <-|
> 
> SEAMCALL B  ->                  PATH B: Taking the locks
> ...
> 
> Why wasn't this scheme chosen?

AFAIK, a waitqueue approach has never been discussed publicly.  Intel may have
considered the idea internally, but I don't recall anything being proposed
publicly (though it's entirely possible I just missed the discussion).
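
For concreteness, here's roughly what the diagram above would look like in
code; the SEAMCALL wrapper, waitqueue, and counter names are all invented for
illustration:

#include <linux/wait.h>
#include <linux/atomic.h>

static DECLARE_WAIT_QUEUE_HEAD(tdx_seamcall_wq);
static atomic_t tdx_seamcalls_in_flight;

static u64 seamcall_path_b(u64 op)
{
	u64 err;

	for (;;) {
		err = seamcall(op);	/* hypothetical SEAMCALL wrapper */
		if (err != TDX_OPERAND_BUSY)
			return err;
		/* PATH B sleeps until PATH A's SEAMCALL returns. */
		wait_event(tdx_seamcall_wq,
			   !atomic_read(&tdx_seamcalls_in_flight));
	}
}

/* PATH A, on returning from its SEAMCALL, wakes the queue: */
static void seamcall_path_a_done(void)
{
	if (atomic_dec_and_test(&tdx_seamcalls_in_flight))
		wake_up(&tdx_seamcall_wq);
}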

Anyway, I don't think a waitqueue would be a good fit, at least not for S-EPT
management, which AFAICT is the only scenario where KVM does the arbitrary "retry
X times and hope things work".  If the contention occurs due to the TDX Module
taking an S-EPT lock in VM-Enter, then KVM won't get a chance to do the "Wake up
the waitqueue" action until the next VM-Exit, which IIUC is well after the TDX
Module drops the S-EPT lock.  In other words, immediately retrying and then punting
the problem further up the stack in KVM does seem to be the least awful "solution"
if there's contention.
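
Reduced to a sketch, the pattern in question looks like the below; the retry
count and the wrapper are illustrative, not the actual patch code:

#define TDX_SEPT_RETRIES 3	/* the arbitrary/magic number in question */

static u64 tdx_seamcall_sept(u64 op)
{
	int retries = TDX_SEPT_RETRIES;
	u64 err;

	do {
		err = seamcall(op);	/* hypothetical SEAMCALL wrapper */
	} while (err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_SEPT) && --retries);

	/* On persistent contention, punt the error up the stack. */
	return err;
}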

That said, I 100% agree that the arbitrary retry stuff is awful.  The zero-step
interaction in particular isn't acceptable.

Intel folks, encountering "TDX_OPERAND_BUSY | TDX_OPERAND_ID_SEPT" on VM-Enter
needs to be treated as a KVM bug, even if it means teaching KVM to kill the VM
if a vCPU is on the cusp of triggering the "pre-determined number" of EPT faults
mentioned in this snippet:

  After a pre-determined number of such EPT violations occur on the same instruction,
  the TDX module starts tracking the GPAs that caused Secure EPT faults and fails
  further host VMM attempts to enter the TD VCPU unless previously faulting private
  GPAs are properly mapped in the Secure EPT.

If the "pre-determined number" is too low to avoid false positives, e.g. if it can
be tripped by a vCPU spinning while a different vCPU finishes handling a fault,
then either the behavior of the TDX Module needs to be revisited, or KVM needs to
stall vCPUs that are approaching the threshold until all outstanding S-EPT faults
have been serviced.
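
Purely as a strawman for the stalling idea, something along these lines; the
threshold, counters, and helpers below do not exist in KVM today:

#define ZERO_STEP_THRESHOLD 8	/* stand-in for the "pre-determined number" */

static void tdx_stall_if_near_zero_step(struct kvm_vcpu *vcpu)
{
	struct vcpu_tdx *tdx = to_tdx(vcpu);		 /* per-vCPU TDX state */
	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm); /* per-VM TDX state */

	if (tdx->ept_violations_on_same_rip < ZERO_STEP_THRESHOLD - 1)
		return;

	/* Block VM-Enter until every in-flight S-EPT fault for this VM has
	 * been serviced, so the TDX Module never sees the vCPU trip the
	 * threshold because of another vCPU's pending fault. */
	wait_event(kvm_tdx->sept_wq,
		   !atomic_read(&kvm_tdx->sept_faults_in_flight));
}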

KVM shouldn't be spuriously zapping private S-EPT entries since the backing memory
is pinned, which means the only way for a vCPU to truly get stuck faulting on a
single instruction is if userspace is broken/malicious, in which case userspace
gets to keep the pieces.
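
And on the "treat it as a KVM bug" side, KVM already has the machinery to kill
a broken VM; a sketch (the enter-error decoding is illustrative):

/* KVM_BUG_ON() marks the VM as bugged, after which KVM refuses to run
 * it, i.e. the VM is effectively killed and userspace keeps the pieces. */
static void tdx_check_enter_error(struct kvm_vcpu *vcpu, u64 err)
{
	KVM_BUG_ON(err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_SEPT), vcpu->kvm);
}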
