Message-ID: <20230516061309.GA7219@lst.de>
Date: Tue, 16 May 2023 08:13:09 +0200
From: Christoph Hellwig <hch@....de>
To: "Michael Kelley (LINUX)" <mikelley@...rosoft.com>
Cc: Petr Tesarik <petrtesarik@...weicloud.com>,
Jonathan Corbet <corbet@....net>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>,
Daniel Vetter <daniel@...ll.ch>,
Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Borislav Petkov <bp@...e.de>,
Randy Dunlap <rdunlap@...radead.org>,
Catalin Marinas <catalin.marinas@....com>,
Damien Le Moal <damien.lemoal@...nsource.wdc.com>,
Kim Phillips <kim.phillips@....com>,
"Steven Rostedt (Google)" <rostedt@...dmis.org>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Hans de Goede <hdegoede@...hat.com>,
Jason Gunthorpe <jgg@...pe.ca>,
Kees Cook <keescook@...omium.org>,
Thomas Gleixner <tglx@...utronix.de>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>,
"open list:DRM DRIVERS" <dri-devel@...ts.freedesktop.org>,
"open list:DMA MAPPING HELPERS" <iommu@...ts.linux.dev>,
Roberto Sassu <roberto.sassu@...wei.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
"petr@...arici.cz" <petr@...arici.cz>
Subject: Re: [PATCH v2 RESEND 4/7] swiotlb: Dynamically allocated bounce
buffers

On Mon, May 15, 2023 at 07:43:52PM +0000, Michael Kelley (LINUX) wrote:
> FWIW, I don't think the approach you have implemented here will be
> practical to use for CoCo VMs (SEV, TDX, whatever else). The problem
> is that dma_direct_alloc_pages() and dma_direct_free_pages() must
> call dma_set_decrypted() and dma_set_encrypted(), respectively. In CoCo
> VMs, these calls are expensive because they require a hypercall to the host,
> and the operation on the host isn't trivial either. I haven't measured the
> overhead, but doing a hypercall on every DMA map operation and on
> every unmap operation has long been something we thought we must
> avoid. The fixed swiotlb bounce buffer space solves this problem by
> doing set_decrypted() in batch at boot time, and never
> doing set_encrypted().
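
(For reference, a rough paraphrase of the two code paths being contrasted
above -- the per-allocation conversion in dma-direct versus the one-time
batched conversion of the static swiotlb pool at boot.  This is written
from memory of kernel/dma/direct.c and kernel/dma/swiotlb.c of roughly
this era and may not match the tree exactly:)

	/* kernel/dma/direct.c (paraphrased): every dma_direct_alloc_pages()
	 * in a CoCo guest goes through this, i.e. one set_memory_decrypted()
	 * hypercall per allocation, plus a matching set_memory_encrypted()
	 * on free.
	 */
	static int dma_set_decrypted(struct device *dev, void *vaddr, size_t size)
	{
		if (!force_dma_unencrypted(dev))
			return 0;
		return set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));
	}

	/* kernel/dma/swiotlb.c (paraphrased): the static bounce buffer pool
	 * is converted once at boot, covering the whole pool in one call.
	 */
	void __init swiotlb_update_mem_attributes(void)
	{
		struct io_tlb_mem *mem = &io_tlb_default_mem;
		unsigned long bytes;

		if (!mem->nslabs || mem->late_alloc)
			return;
		bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT);
		set_memory_decrypted((unsigned long)phys_to_virt(mem->start),
				     bytes >> PAGE_SHIFT);
	}
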
I also suspect it doesn't really scale too well due to the sheer number
of allocations.  A better way to implement this would probably be to add
more large chunks that are used just like the main swiotlb buffer: when
we run out of space, try to allocate another chunk of the same size in
the background, similar to what we do with the pool in dma-pool.c.  That
means a fairly large allocation, so we'd need compaction or even CMA to
back it up, but the other big upside is that it also reduces the number
of buffers that need to be checked in is_swiotlb_buffer() and on the
free / sync side.
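
A very rough sketch of that idea, loosely modelled on the background
refill work in kernel/dma/pool.c.  All of the swiotlb_chunk /
swiotlb_grow_* names below are made up purely for illustration and do
not exist in the tree:

	#include <linux/gfp.h>
	#include <linux/list.h>
	#include <linux/mm.h>
	#include <linux/rculist.h>
	#include <linux/set_memory.h>
	#include <linux/slab.h>
	#include <linux/swiotlb.h>
	#include <linux/workqueue.h>

	struct swiotlb_chunk {			/* hypothetical */
		struct list_head node;
		phys_addr_t start;
		phys_addr_t end;
		/* per-chunk slot bookkeeping, as in struct io_tlb_mem */
	};

	/* Extra pools: is_swiotlb_buffer() only has to walk this short
	 * list instead of tracking every individual bounce allocation.
	 */
	static LIST_HEAD(swiotlb_chunks);

	static void swiotlb_grow_fn(struct work_struct *work)
	{
		/* One large allocation per growth step.  A pool-sized chunk
		 * like this realistically needs CMA (cma_alloc()) or several
		 * smaller blocks; plain alloc_pages() is used here only for
		 * brevity.
		 */
		unsigned int order = get_order(IO_TLB_DEFAULT_SIZE);
		struct page *page = alloc_pages(GFP_KERNEL, order);
		struct swiotlb_chunk *chunk;

		if (!page)
			return;

		/* One batched decryption per chunk instead of one hypercall
		 * per mapping.
		 */
		if (set_memory_decrypted((unsigned long)page_address(page),
					 1 << order)) {
			__free_pages(page, order);
			return;
		}

		chunk = kzalloc(sizeof(*chunk), GFP_KERNEL);
		if (!chunk) {
			set_memory_encrypted((unsigned long)page_address(page),
					     1 << order);
			__free_pages(page, order);
			return;
		}

		chunk->start = page_to_phys(page);
		chunk->end = chunk->start + (PAGE_SIZE << order);
		list_add_rcu(&chunk->node, &swiotlb_chunks);
	}
	static DECLARE_WORK(swiotlb_grow_work, swiotlb_grow_fn);

	/* Hypothetical hook in the map path: when the existing pools are
	 * nearly full, kick off a background grow instead of paying an
	 * allocation plus a hypercall on every mapping.
	 */
	static void swiotlb_maybe_grow(void)
	{
		schedule_work(&swiotlb_grow_work);
	}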