Message-ID: <20201123175632.GA21539@char.us.oracle.com>
Date:   Mon, 23 Nov 2020 12:56:32 -0500
From:   Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:     Borislav Petkov <bp@...en8.de>
Cc:     Ashish Kalra <Ashish.Kalra@....com>, hch@....de,
        tglx@...utronix.de, mingo@...hat.com, hpa@...or.com,
        x86@...nel.org, luto@...nel.org, peterz@...radead.org,
        dave.hansen@...ux-intel.com, iommu@...ts.linux-foundation.org,
        linux-kernel@...r.kernel.org, brijesh.singh@....com,
        Thomas.Lendacky@....com, jon.grimm@....com, rientjes@...gle.com
Subject: Re: [PATCH v6] swiotlb: Adjust SWIOTLB bounce buffer size for SEV guests.

On Mon, Nov 23, 2020 at 06:06:47PM +0100, Borislav Petkov wrote:
> On Thu, Nov 19, 2020 at 09:42:05PM +0000, Ashish Kalra wrote:
> > From: Ashish Kalra <ashish.kalra@....com>
> > 
> > For SEV, all DMA to and from guest has to use shared (un-encrypted) pages.
> > SEV uses SWIOTLB to make this happen without requiring changes to device
> > drivers.  However, depending on workload being run, the default 64MB of
> > SWIOTLB might not be enough and SWIOTLB may run out of buffers to use
> > for DMA, resulting in I/O errors and/or performance degradation for
> > high I/O workloads.
> > 
> > Increase the default size of SWIOTLB for SEV guests using a minimum
> > value of 128MB and a maximum value of 512MB, determining on amount
> > of provisioned guest memory.
> 
> That sentence needs massaging.
> 
> > Using late_initcall() interface to invoke swiotlb_adjust() does not
> > work as the size adjustment needs to be done before mem_encrypt_init()
> > and reserve_crashkernel() which use the allocated SWIOTLB buffer size,
> > hence calling it explicitly from setup_arch().
> 
> "hence call it ... "
> 
> > 
> > The SWIOTLB default size adjustment is added as an architecture specific
> 
> "... is added... " needs to be "Add ..."
> 
> > interface/callback to allow architectures such as those supporting memory
> > encryption to adjust/expand SWIOTLB size for their use.
> > 
> > v5 fixed build errors and warnings as
> > Reported-by: kbuild test robot <lkp@...el.com>
> > 
> > Signed-off-by: Ashish Kalra <ashish.kalra@....com>
> > ---
> >  arch/x86/kernel/setup.c   |  2 ++
> >  arch/x86/mm/mem_encrypt.c | 32 ++++++++++++++++++++++++++++++++
> >  include/linux/swiotlb.h   |  6 ++++++
> >  kernel/dma/swiotlb.c      | 24 ++++++++++++++++++++++++
> >  4 files changed, 64 insertions(+)
> > 
> > diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> > index 3511736fbc74..b073d58dd4a3 100644
> > --- a/arch/x86/kernel/setup.c
> > +++ b/arch/x86/kernel/setup.c
> > @@ -1166,6 +1166,8 @@ void __init setup_arch(char **cmdline_p)
> >  	if (boot_cpu_has(X86_FEATURE_GBPAGES))
> >  		hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> >  
> > +	swiotlb_adjust();
> > +
> >  	/*
> >  	 * Reserve memory for crash kernel after SRAT is parsed so that it
> >  	 * won't consume hotpluggable memory.
> > diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> > index 3f248f0d0e07..c79a0d761db5 100644
> > --- a/arch/x86/mm/mem_encrypt.c
> > +++ b/arch/x86/mm/mem_encrypt.c
> > @@ -490,6 +490,38 @@ static void print_mem_encrypt_feature_info(void)
> >  }
> >  
> >  /* Architecture __weak replacement functions */
> > +unsigned long __init arch_swiotlb_adjust(unsigned long iotlb_default_size)
> > +{
> > +	unsigned long size = 0;
> 
> 	unsigned long size = iotlb_default_size;
> 
> > +
> > +	/*
> > +	 * For SEV, all DMA has to occur via shared/unencrypted pages.
> > +	 * SEV uses SWIOTLB to make this happen without changing device
> > +	 * drivers. However, depending on the workload being run, the
> > +	 * default 64MB of SWIOTLB may not be enough & SWIOTLB may
> 						     ^
> 
> Use words pls, not "&".
> 
> 
> > +	 * run out of buffers for DMA, resulting in I/O errors and/or
> > +	 * performance degradation especially with high I/O workloads.
> > +	 * Increase the default size of SWIOTLB for SEV guests using
> > +	 * a minimum value of 128MB and a maximum value of 512MB,
> > +	 * depending on amount of provisioned guest memory.
> > +	 */
> > +	if (sev_active()) {
> > +		phys_addr_t total_mem = memblock_phys_mem_size();
> > +
> > +		if (total_mem <= SZ_1G)
> > +			size = max(iotlb_default_size, (unsigned long) SZ_128M);
> > +		else if (total_mem <= SZ_4G)
> > +			size = max(iotlb_default_size, (unsigned long) SZ_256M);

That is eating 128MB for a 1GB guest, i.e. about 12% of the guest memory
allocated statically for this.

And for 2GB guests it is still about 12%, dropping to roughly 8% at 3GB and
6% at 4GB.

I would prefer this to be based on the memory count, that is, 6% of total
memory. And then going forward we can allocate memory _after_ boot, stitch in
a late SWIOTLB pool, and allocate on demand.
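
Roughly something like this (just a sketch; the 6% figure and the SZ_1G cap
are illustrative, not a tested implementation, and it is meant to live in
arch/x86/mm/mem_encrypt.c like the hunk above, where the needed headers are
already included):

unsigned long __init arch_swiotlb_adjust(unsigned long iotlb_default_size)
{
	unsigned long size = iotlb_default_size;

	if (sev_active()) {
		phys_addr_t total_mem = memblock_phys_mem_size();

		/* Scale with guest memory instead of fixed 128M/256M/512M steps. */
		size = total_mem * 6 / 100;
		size = clamp_val(size, iotlb_default_size, SZ_1G);

		pr_info("SWIOTLB bounce buffer size adjusted to %luMB for SEV",
			size >> 20);
	}

	return size;
}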



> > +		else
> > +			size = max(iotlb_default_size, (unsigned long) SZ_512M);
> > +
> > +		pr_info("SWIOTLB bounce buffer size adjusted to %luMB for SEV platform",
> 
> just "... for SEV" - no need for "platform".
> 
> ...
> 
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index c19379fabd20..3be9a19ea0a5 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -163,6 +163,30 @@ unsigned long swiotlb_size_or_default(void)
> >  	return size ? size : (IO_TLB_DEFAULT_SIZE);
> >  }
> >  
> > +unsigned long __init __weak arch_swiotlb_adjust(unsigned long size)
> > +{
> > +	return 0;
> 
> That, of course, needs to return size, not 0.

This is not going to work for TDX. I think having a registration interface
in SWIOTLB for this would be better going forward.

As in, there would be a swiotlb_register_adjuster() which the AMD SEV code
can call at start, and TDX (and other platforms) can do the same.
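
Roughly (a strawman only, none of these names exist yet; it would sit in
kernel/dma/swiotlb.c next to swiotlb_adjust()):

typedef unsigned long (*swiotlb_adjuster_t)(unsigned long default_size);

static swiotlb_adjuster_t swiotlb_adjuster __initdata;

int __init swiotlb_register_adjuster(swiotlb_adjuster_t adjuster)
{
	/* Only one platform (SEV, TDX, ...) is expected to register on a system. */
	if (swiotlb_adjuster)
		return -EBUSY;

	swiotlb_adjuster = adjuster;
	return 0;
}

The SEV setup code would then call swiotlb_register_adjuster() with its own
callback (say, a sev_swiotlb_adjust() doing the sizing above) early in boot,
and swiotlb_adjust() would invoke the registered callback, if any, instead of
a __weak arch hook.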


> 
> > +}
> > +
> > +void __init swiotlb_adjust(void)
> > +{
> > +	unsigned long size;
> > +
> > +	/*
> > +	 * If swiotlb parameter has not been specified, give a chance to
> > +	 * architectures such as those supporting memory encryption to
> > +	 * adjust/expand SWIOTLB size for their use.
> > +	 */
> 
> And when you preset the function-local argument "size" with the size
> coming in as the size argument of arch_swiotlb_adjust()...
> 
> > +	if (!io_tlb_nslabs) {
> > +		size = arch_swiotlb_adjust(IO_TLB_DEFAULT_SIZE);
> > +		if (size) {
> 
> ... you don't have to do if (size) here either but simply use size to
> compute io_tlb_nslabs, I'd say.
> 
> Thx.
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette
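
For completeness, with those two changes (the __weak stub returning the size
it was given, and dropping the if (size) check) the generic side would shrink
to roughly this (sketch only; as said above I would still rather see a
registration interface):

unsigned long __init __weak arch_swiotlb_adjust(unsigned long size)
{
	/* No adjustment by default: hand back the size we were given. */
	return size;
}

void __init swiotlb_adjust(void)
{
	/* Nothing to do if the size was fixed via swiotlb= on the command line. */
	if (io_tlb_nslabs)
		return;

	io_tlb_nslabs = ALIGN(arch_swiotlb_adjust(IO_TLB_DEFAULT_SIZE) >> IO_TLB_SHIFT,
			      IO_TLB_SEGSIZE);
}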
