Message-ID: <20230407060047.GE6803@lst.de>
Date: Fri, 7 Apr 2023 08:00:47 +0200
From: Christoph Hellwig <hch@....de>
To: Petr Tesarik <petrtesarik@...weicloud.com>
Cc: Jonathan Corbet <corbet@....net>, Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
Borislav Petkov <bp@...e.de>,
"Paul E. McKenney" <paulmck@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Randy Dunlap <rdunlap@...radead.org>,
Damien Le Moal <damien.lemoal@...nsource.wdc.com>,
Kim Phillips <kim.phillips@....com>,
"Steven Rostedt (Google)" <rostedt@...dmis.org>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>,
"open list:DMA MAPPING HELPERS" <iommu@...ts.linux.dev>,
Roberto Sassu <roberto.sassu@...wei.com>, petr@...arici.cz
Subject: Re: [RFC v1 0/4] Allow dynamic allocation of software IO TLB
bounce buffers
On Mon, Mar 27, 2023 at 01:06:34PM +0200, Petr Tesarik wrote:
> B. Allocate a very big SWIOTLB, but allow it to be used for normal
> allocations (similar to the CMA approach). The advantage is that there
> is only one table, pushing the performance impact down to almost zero. The
> main challenge is migrating pages to/from the SWIOTLB. Existing CMA code
> cannot be reused, because CMA cannot be used from atomic contexts,
> unlike SWIOTLB.
That actually sounds very interesting, although I'd go further and
figure out if we:
a) could get away with only allowing the CMA allocation for sleeping
   contexts, if we have enough sleeping contexts for that to matter
   (rough sketch below)
b) check with the CMA maintainers whether it is feasible and acceptable
   to them to extend CMA for IRQ allocations.
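To illustrate a): a very rough sketch of what the bounce buffer
allocation path could look like. swiotlb_cma and swiotlb_pool_alloc()
are made-up names for illustration only, not existing interfaces:

#include <linux/cma.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical CMA area reserved for bounce buffers at boot. */
static struct cma *swiotlb_cma;

/* Placeholder for whatever the static-pool slot allocator ends up being. */
struct page *swiotlb_pool_alloc(size_t size);

static struct page *swiotlb_alloc_bounce(size_t size, gfp_t gfp)
{
	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	struct page *page;

	if (gfpflags_allow_blocking(gfp)) {
		/* Sleeping context: CMA can migrate pages out of the way. */
		page = cma_alloc(swiotlb_cma, count, 0, true);
		if (page)
			return page;
	}

	/* Atomic context (or CMA failure): fall back to the static pool. */
	return swiotlb_pool_alloc(size);
}

The point is just that gfpflags_allow_blocking() gives us a cheap way
to route sleepable callers to CMA and keep the atomic ones on the
statically allocated pool.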
That being said, I think cases like dma-buf sharing really need to
be addressed at a higher level instead of basically allocating these
long-term buffers twice.
I'd also really love to hear some feedback from the various confidential
computing implementors, as that seems to be the big driving force for
swiotlb currently.