Date:   Fri, 15 Jan 2021 09:19:19 -0800
From:   Sean Christopherson <seanjc@...gle.com>
To:     Tom Lendacky <thomas.lendacky@....com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Andy Lutomirski <luto@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, Borislav Petkov <bp@...e.de>,
        Brijesh Singh <brijesh.singh@....com>
Subject: Re: [PATCH v2 14/14] KVM: SVM: Skip SEV cache flush if no ASIDs have
 been used

On Fri, Jan 15, 2021, Tom Lendacky wrote:
> On 1/13/21 6:37 PM, Sean Christopherson wrote:
> > Skip SEV's expensive WBINVD and DF_FLUSH if there are no SEV ASIDs
> > waiting to be reclaimed, e.g. if SEV was never used.  This "fixes" an
> > issue where the DF_FLUSH fails during hardware teardown if the original
> > SEV_INIT failed.  Ideally, SEV wouldn't be marked as enabled in KVM if
> > SEV_INIT fails, but that's a problem for another day.
> > 
> > Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> > ---
> >   arch/x86/kvm/svm/sev.c | 22 ++++++++++------------
> >   1 file changed, 10 insertions(+), 12 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> > index 23a4bead4a82..e71bc742d8da 100644
> > --- a/arch/x86/kvm/svm/sev.c
> > +++ b/arch/x86/kvm/svm/sev.c
> > @@ -56,9 +56,14 @@ struct enc_region {
> >   	unsigned long size;
> >   };
> > -static int sev_flush_asids(void)
> > +static int sev_flush_asids(int min_asid, int max_asid)
> >   {
> > -	int ret, error = 0;
> > +	int ret, pos, error = 0;
> > +
> > +	/* Check if there are any ASIDs to reclaim before performing a flush */
> > +	pos = find_next_bit(sev_reclaim_asid_bitmap, max_sev_asid, min_asid);
> > +	if (pos >= max_asid)
> > +		return -EBUSY;
> >   	/*
> >   	 * DEACTIVATE will clear the WBINVD indicator causing DF_FLUSH to fail,
> > @@ -80,14 +85,7 @@ static int sev_flush_asids(void)
> >   /* Must be called with the sev_bitmap_lock held */
> >   static bool __sev_recycle_asids(int min_asid, int max_asid)
> >   {
> > -	int pos;
> > -
> > -	/* Check if there are any ASIDs to reclaim before performing a flush */
> > -	pos = find_next_bit(sev_reclaim_asid_bitmap, max_sev_asid, min_asid);
> > -	if (pos >= max_asid)
> > -		return false;
> > -
> > -	if (sev_flush_asids())
> > +	if (sev_flush_asids(min_asid, max_asid))
> >   		return false;
> >   	/* The flush process will flush all reclaimable SEV and SEV-ES ASIDs */
> > @@ -1323,10 +1321,10 @@ void sev_hardware_teardown(void)
> >   	if (!sev_enabled)
> >   		return;
> > +	sev_flush_asids(0, max_sev_asid);
> 
> I guess you could have called __sev_recycle_asids(0, max_sev_asid) here and
> left things unchanged up above. It would do the extra bitmap_xor() and
> bitmap_zero() operations, though. What do you think?

IMO, calling "recycle" from the teardown flow would be confusing without a
comment to explain that it's the flush that we really care about, and at that
point it's hard to argue against calling "flush" directly.

The cost of the extra operations is almost certainly negligible, but, similar to
the above, it will leave readers wondering why the teardown flow bothers to
xor/zero the bitmap, only to immediately free it.
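
For reference, the tail of __sev_recycle_asids() after the flush is roughly the
following (paraphrased from memory; the real code in sev.c is authoritative):

	/* The flush process will flush all reclaimable SEV and SEV-ES ASIDs */
	bitmap_xor(sev_asid_bitmap, sev_asid_bitmap, sev_reclaim_asid_bitmap,
		   max_sev_asid);
	bitmap_zero(sev_reclaim_asid_bitmap, max_sev_asid);

	return true;

i.e. the reclaimed ASIDs get dropped from sev_asid_bitmap and the reclaim
bitmap is reset, which is pure busywork if both bitmaps are about to be freed.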

> Also, maybe a comment about not needing the bitmap lock because this is
> during teardown.

Ya, I'll add that.
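
E.g. something along these lines in sev_hardware_teardown() (exact wording TBD,
just a sketch):

	/*
	 * No need to take sev_bitmap_lock: all VMs have been destroyed and
	 * SEV is being torn down, so nothing else can touch the ASID bitmaps.
	 */
	sev_flush_asids(0, max_sev_asid);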
