Message-ID: <CAGtprH-7SYCBjrck2k7vTtHrWbkdhkOicuM9Yz900xuKHMh1vA@mail.gmail.com>
Date: Sat, 24 Feb 2024 22:37:19 +0530
From: Vishal Annapurve <vannapurve@...gle.com>
To: Michael Kelley <mhklinux@...look.com>
Cc: Alexander Graf <graf@...zon.com>, "Kirill A. Shutemov" <kirill@...temov.name>, 
	"x86@...nel.org" <x86@...nel.org>, 
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "pbonzini@...hat.com" <pbonzini@...hat.com>, 
	"rientjes@...gle.com" <rientjes@...gle.com>, "seanjc@...gle.com" <seanjc@...gle.com>, 
	"erdemaktas@...gle.com" <erdemaktas@...gle.com>, "ackerleytng@...gle.com" <ackerleytng@...gle.com>, 
	"jxgao@...gle.com" <jxgao@...gle.com>, "sagis@...gle.com" <sagis@...gle.com>, 
	"oupton@...gle.com" <oupton@...gle.com>, "peterx@...hat.com" <peterx@...hat.com>, 
	"vkuznets@...hat.com" <vkuznets@...hat.com>, "dmatlack@...gle.com" <dmatlack@...gle.com>, 
	"pgonda@...gle.com" <pgonda@...gle.com>, "michael.roth@....com" <michael.roth@....com>, 
	"thomas.lendacky@....com" <thomas.lendacky@....com>, 
	"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>, 
	"linux-coco@...ts.linux.dev" <linux-coco@...ts.linux.dev>, 
	"chao.p.peng@...ux.intel.com" <chao.p.peng@...ux.intel.com>, 
	"isaku.yamahata@...il.com" <isaku.yamahata@...il.com>, 
	"andrew.jones@...ux.dev" <andrew.jones@...ux.dev>, "corbet@....net" <corbet@....net>, "hch@....de" <hch@....de>, 
	"m.szyprowski@...sung.com" <m.szyprowski@...sung.com>, "rostedt@...dmis.org" <rostedt@...dmis.org>, 
	"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>
Subject: Re: [RFC V1 1/5] swiotlb: Support allocating DMA memory from SWIOTLB

On Fri, Feb 16, 2024 at 1:56 AM Michael Kelley <mhklinux@...look.com> wrote:
>
> From: Alexander Graf <graf@...zon.com> Sent: Thursday, February 15, 2024 1:44 AM
> >
> > On 15.02.24 04:33, Vishal Annapurve wrote:
> > > On Wed, Feb 14, 2024 at 8:20 PM Kirill A. Shutemov
> > <kirill@...temov.name> wrote:
> > >> On Fri, Jan 12, 2024 at 05:52:47AM +0000, Vishal Annapurve wrote:
> > >>> Modify SWIOTLB framework to allocate DMA memory always from SWIOTLB.
> > >>>
> > >>> CVMs use SWIOTLB buffers for bouncing memory when using dma_map_* APIs
> > >>> to setup memory for IO operations. SWIOTLB buffers are marked as shared
> > >>> once during early boot.
> > >>>
> > >>> Buffers allocated using dma_alloc_* APIs are allocated from kernel memory
> > >>> and then converted to shared during each API invocation. This patch ensures
> > >>> that such buffers are also allocated from already shared SWIOTLB
> > >>> regions. This allows enforcing alignment requirements on regions marked
> > >>> as shared.
> > >> But does it work in practice?
> > >>
> > >> Some devices (like GPUs) require a lot of DMA memory. So with this approach
> > >> we would need to have a huge SWIOTLB buffer that is unused in most VMs.
> > >>
> > > Current implementation limits the size of statically allocated SWIOTLB
> > > memory pool to 1G. I was proposing to enable dynamic SWIOTLB for CVMs
> > > in addition to aligning the memory allocations to hugepage sizes, so
> > > that the SWIOTLB pool can be scaled up on demand.
> > >
>
> Vishal --
>
> When the dynamic swiotlb mechanism tries to grow swiotlb space
> by adding another pool, it gets the additional memory as a single
> physically contiguous chunk using alloc_pages().   It starts by trying
> to allocate a chunk the size of the original swiotlb size, and if that
> fails, halves the size until it gets a size where the allocation succeeds.
> The minimum size is 1 Mbyte, and if that fails, the "grow" fails.
>

Thanks for pointing this out.
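
If I am reading that mechanism correctly, it is roughly equivalent to
the following sketch (illustrative names only, not the actual
kernel/dma/swiotlb.c code): start at the original pool size and halve
on each failure down to the 1 MB floor.

#include <linux/gfp.h>

#define GROW_MIN_BYTES	(1UL << 20)	/* 1 MB floor for a grow attempt */

/* Illustrative only: try a single physically contiguous chunk,
 * halving the request on failure. */
static struct page *try_grow_pool(unsigned long want_bytes)
{
	unsigned long bytes = want_bytes;	/* original swiotlb size */

	while (bytes >= GROW_MIN_BYTES) {
		struct page *pages = alloc_pages(GFP_KERNEL | __GFP_NOWARN,
						 get_order(bytes));
		if (pages)
			return pages;	/* grow succeeds at this size */
		bytes >>= 1;		/* halve and retry */
	}
	return NULL;			/* below 1 MB: the grow fails */
}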

> So it seems like dynamic swiotlb is subject to almost the same
> memory fragmentation limitations as trying to allocate huge pages.
> swiotlb needs a minimum of 1 Mbyte contiguous in order to grow,
> while huge pages need 2 Mbytes, but either is likely to be
> problematic in a VM that has been running a while.  With that
> in mind, I'm not clear on the benefit of enabling dynamic swiotlb.
> It seems like it just moves around the problem of needing high order
> contiguous memory allocations.  Or am I missing something?
>

Currently, the statically allocated SWIOTLB pool is limited to 1GB in
size. Kirill has pointed out that devices like GPUs could need a
significant amount of memory to be allocated from the SWIOTLB pool.
Without dynamic SWIOTLB, such devices risk exhausting the pool with no
way to recover.

In addition, I am proposing that the dma_alloc_* APIs use the SWIOTLB
area as well, which adds to the memory pressure. If there were a way
to calculate the maximum amount of memory needed for all DMA
allocations across all possible devices used by CoCo VMs, then that
number could be used to preallocate the SWIOTLB pool. I am arguing
that such a maximum bound would be difficult to calculate, and that
instead of trying to calculate it, allowing SWIOTLB to scale
dynamically would be better since it provides better memory
utilization.

So if the above argument for enabling dynamic SWIOTLB makes sense,
then it should be relatively easy to add hugepage alignment
restrictions to SWIOTLB pool increments (in line with the observation
that 2MB and 1MB allocations are nearly equally prone to failure due
to memory fragmentation).
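
For illustration, the increment path with such an alignment
restriction could look roughly like the sketch below. The names are
hypothetical, not the existing swiotlb internals; the key property is
that a buddy allocation of a given order is naturally aligned to its
own size, so a 2MB-multiple request also comes back 2MB aligned and
can be converted to shared at hugepage granularity.

#include <linux/gfp.h>
#include <linux/align.h>

#define INCREMENT_ALIGN	(2UL << 20)	/* 2 MB hugepage size */

/* Hypothetical helper: round each dynamic pool increment up to a
 * whole number of hugepages so the private->shared conversion can be
 * done at hugepage granularity. */
static struct page *grow_pool_hugepage_aligned(unsigned long want_bytes)
{
	unsigned long bytes = ALIGN(want_bytes, INCREMENT_ALIGN);

	/* alloc_pages() returns a chunk aligned to its own order, so a
	 * 2 MB (order-9) or larger allocation needs no extra alignment. */
	return alloc_pages(GFP_KERNEL | __GFP_NOWARN, get_order(bytes));
}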

> Michael
>
> > > The issue with aligning the pool areas to hugepages is that hugepage
> > > allocation at runtime is not guaranteed. Guaranteeing the hugepage
> > > allocation might need calculating the upper bound in advance, which
> > > defeats the purpose of enabling dynamic SWIOTLB. I am open to
> > > suggestions here.
> >
> >
> > You could allocate a max bound at boot using CMA and then only fill into
> > the CMA area when SWIOTLB size requirements increase? The CMA region
> > will allow movable allocations as long as you don't require the CMA space.
> >
> >
> > Alex
>
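
For completeness, Alex's CMA suggestion could look roughly like the
sketch below. The symbols (swiotlb_cma, swiotlb_cma_reserve,
grow_from_cma) and the 1 GB bound are made up for illustration; the
point is that the reserved range stays available for movable
allocations until swiotlb actually draws from it.

#include <linux/cma.h>
#include <linux/mm.h>
#include <linux/sizes.h>

static struct cma *swiotlb_cma;	/* illustrative, not an existing symbol */

/* Boot-time: reserve an upper-bound region; its pages remain usable
 * for movable allocations until we pull them out below. */
static int __init swiotlb_cma_reserve(void)
{
	return cma_declare_contiguous(0, SZ_1G, 0, SZ_2M, 0, false,
				      "swiotlb_grow", &swiotlb_cma);
}

/* Grow path: take a hugepage-aligned chunk out of the CMA area on
 * demand instead of calling alloc_pages(). */
static struct page *grow_from_cma(unsigned long bytes)
{
	return cma_alloc(swiotlb_cma, PAGE_ALIGN(bytes) >> PAGE_SHIFT,
			 get_order(SZ_2M), false);
}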
