Date:   Fri, 06 May 2022 12:11:03 +1200
From:   Kai Huang <kai.huang@...el.com>
To:     Dave Hansen <dave.hansen@...el.com>,
        Sathyanarayanan Kuppuswamy 
        <sathyanarayanan.kuppuswamy@...ux.intel.com>,
        "Kirill A. Shutemov" <kirill@...temov.name>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
        "H . Peter Anvin" <hpa@...or.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Tony Luck <tony.luck@...el.com>,
        Andi Kleen <ak@...ux.intel.com>,
        Wander Lairson Costa <wander@...hat.com>,
        Isaku Yamahata <isaku.yamahata@...il.com>,
        marcelo.cerri@...onical.com, tim.gardner@...onical.com,
        khalid.elmously@...onical.com, philip.cox@...onical.com,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 3/3] x86/tdx: Add Quote generation support

On Thu, 2022-05-05 at 16:06 -0700, Dave Hansen wrote:
> On 5/5/22 15:15, Kai Huang wrote:
> > set_memory_xx()  is supposedly only for direct-mapping.  Please use my
> > suggestion above.
> 
> Kai, please take a look at some of the other users, especially
> set_memory_x().  See how long the "supposed" requirement holds up.

Right, I should not have used "supposed".  My bad.  I got that impression from
roughly looking at set_memory_{uc|wc|...}().  It looks like they can only work
on the direct mapping, as they internally use __pa():

int set_memory_wc(unsigned long addr, int numpages)
{
        int ret;

        ret = memtype_reserve(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
                _PAGE_CACHE_MODE_WC, NULL);
        if (ret)
                return ret;

        ret = _set_memory_wc(addr, numpages);
        if (ret)
                memtype_free(__pa(addr), __pa(addr) + numpages * PAGE_SIZE);

        return ret;
}

Don't all set_memory_xxx() functions follow the same scheme?

> 
> That said, I've forgotten by now if this _could_ have used vmalloc() or
> vmap() or vmap_pfn().  None of the logic about why or how the allocator
> and mapping design decisions were made.  Could that be be rectified for
> the next post?

Looking at set_memory_{encrypted|decrypted}() again, it seems they currently
only work on the direct mapping for TDX (as Sathya's code has shown).  For AMD
it appears they can work on any virtual address, since AMD uses
lookup_address() to find the PFN.
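
Just to illustrate the point (a rough sketch, not the actual SEV code; it only
assumes the usual lookup_address()/pte_pfn() helpers and ignores huge-page
offset handling):

static unsigned long any_va_to_pfn(unsigned long vaddr)
{
        unsigned int level;
        pte_t *pte = lookup_address(vaddr, &level);

        /* No (present) mapping for this virtual address. */
        if (!pte || !pte_present(*pte))
                return 0;

        /* Take the PFN from the page table entry itself. */
        return pte_pfn(*pte);
}

Walking the page tables like this works for any kernel virtual address, while
__pa() is only valid for the direct mapping.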

So if the two are supposed to work on any virtual address, then it makes sense
to fix this on the TDX side.

Btw, regarding my suggestion of using vmap() with prot_decrypted() + MapGPA():
after thinking about it again, I think there is also a problem -- the TLB for
the private mapping and the cache are not flushed.  So it looks like we should
fix set_memory_decrypted() to work on any virtual address and use it on the
vmap()'ed address.

Back to the "why and how the allocator and mapping design decisions were made",
let me summarize options and my preference below:

1) Using the DMA API.  This guarantees that for TDX 1.0 the allocated buffer is
shared (set_memory_decrypted() is called for swiotlb), but it may not guarantee
the buffer is shared in future generations of TDX.  That of course depends on
how the DMA API implementations are changed for future TDX, but conceptually
using the DMA API here is more a matter of convenience.  Also, in order to use
the DMA API, we need additional code to handle an extra 'platform device',
which is not otherwise needed by the attestation driver (a rough sketch follows
this list).

2) Using vmap() + set_memory_decrypted().  This requires changing the latter to
support any virtual address for TDX, but now I think it is the right way, since
it is better to have some infrastructure besides the DMA API to convert private
pages to shared anyway (sketched at the end of this mail).

3) Using vmap() + MapGPA().  This requires additional work to flush the TLB and
the cache.  Now I think we should not use this.
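
To make the 'platform device' point in 1) concrete, the allocation would
roughly look like below ('attest_dev', QUOTE_BUF_LEN and the function name are
made up for illustration, not from the patch):

static void *quote_buf;
static dma_addr_t quote_dma_handle;

static int tdx_attest_alloc_quote_buf(struct device *attest_dev)
{
        /*
         * For TDX 1.0 the DMA API makes the buffer shared internally
         * (set_memory_decrypted() is called for the swiotlb/coherent
         * buffer), which is what 1) relies on.
         */
        quote_buf = dma_alloc_coherent(attest_dev, QUOTE_BUF_LEN,
                                       &quote_dma_handle, GFP_KERNEL);
        return quote_buf ? 0 : -ENOMEM;
}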

Given the above, I personally think 2) is better.
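
E.g. something like below (rough sketch only; the function name is made up, it
assumes set_memory_decrypted() has been fixed to accept a vmap()'ed address,
and cleanup on failure is omitted):

static void *tdx_alloc_shared_buf(unsigned int npages)
{
        struct page **pages;
        void *vaddr;
        int i;

        pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return NULL;

        for (i = 0; i < npages; i++) {
                pages[i] = alloc_page(GFP_KERNEL);
                if (!pages[i])
                        return NULL;    /* cleanup omitted for brevity */
        }

        vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
        if (!vaddr)
                return NULL;

        /*
         * Convert the pages to shared.  With the fix this also does the
         * MapGPA call plus the TLB and cache flushing, which is what the
         * plain vmap() + MapGPA() approach in 3) lacks.
         */
        if (set_memory_decrypted((unsigned long)vaddr, npages))
                return NULL;

        return vaddr;
}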

Kirill, what's your opinion?
