Message-ID: <20250801145305.GB26511@ziepe.ca>
Date: Fri, 1 Aug 2025 11:53:05 -0300
From: Jason Gunthorpe <jgg@...pe.ca>
To: Suzuki K Poulose <suzuki.poulose@....com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@...nel.org>,
linux-coco@...ts.linux.dev, kvmarm@...ts.linux.dev,
linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org,
aik@....com, lukas@...ner.de, Samuel Ortiz <sameo@...osinc.com>,
Xu Yilun <yilun.xu@...ux.intel.com>,
Steven Price <steven.price@....com>,
Catalin Marinas <catalin.marinas@....com>,
Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>,
Oliver Upton <oliver.upton@...ux.dev>
Subject: Re: [RFC PATCH v1 04/38] tsm: Support DMA Allocation from private
memory
On Fri, Aug 01, 2025 at 10:30:35AM +0100, Suzuki K Poulose wrote:
> > Is there a reason not to just dump that into the T=0 SMMU using 1G
> > huge pages and never touch it again? The GPT provides protection?
>
> That is possible, once we get guest_memfd mmap support merged upstream.
> GPT does provide protection. The only caveat is: does guest_memfd
> support this at all? i.e., shared->private transitions with a shared
> mapping still in place (though only in the SMMU, not in the host CPU
> page tables).

I don't know; we haven't gotten to the guest_memfd/IOMMU integration yet,
which is why I am asking these questions.

I think AMD and ARM would both be interested in guest_memfd <-> IOMMU
working this way, at least.
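
As a rough illustration of the VMM side of that, here is a minimal
userspace sketch. It assumes guest_memfd grows mmap support (the part
Suzuki notes is not merged yet), a legacy VFIO type1 container fd, and
a suitably aligned backing so the SMMU can use 1G block mappings; the
function and parameter names are placeholders, not anything in this
series:

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

static int premap_guest_ram(int container_fd, int guest_memfd,
                            uint64_t ram_iova, uint64_t ram_size)
{
        struct vfio_iommu_type1_dma_map map = {
                .argsz = sizeof(map),
                .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        };
        void *va;

        /*
         * Shared mapping of the guest RAM backing; mmap of guest_memfd
         * is the hypothetical part here.
         */
        va = mmap(NULL, ram_size, PROT_READ | PROT_WRITE, MAP_SHARED,
                  guest_memfd, 0);
        if (va == MAP_FAILED)
                return -1;

        map.vaddr = (uintptr_t)va;
        map.iova = ram_iova;
        map.size = ram_size;

        /*
         * One pinned T=0 SMMU mapping for the whole range, installed
         * once; the GPT/GPC is what blocks device access to granules
         * that are currently private.
         */
        return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}

The same mapping could equally be expressed through iommufd; this is
only meant to show the "map once and never touch it again" shape.
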
> I think we can go ahead with the VMM pre-populating the entire DRAM
> and keeping it pinned for DA. Rather than doing this from the VFIO
> kernel code, it could be done by the VMM, as it has better knowledge
> of the populated contents and can map the rest as "unmeasured" 0s.

Yes, if this is done it should be done by the VMM and run through
guest_memfd/KVM, however that ends up being agreed.
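
Building on the sketch above, the pre-population itself could be as
simple as faulting in the whole shared mapping and copying in only the
measured payload, leaving everything else as "unmeasured" zeros. Again
just a sketch with made-up names, assuming the same hypothetical
guest_memfd mmap and a kernel/libc that provide MADV_POPULATE_WRITE:

#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

static int prepopulate_guest_ram(void *ram_va, uint64_t ram_size,
                                 const void *payload, size_t payload_len,
                                 size_t payload_off)
{
        /* Allocate and fault in the whole range up front (zero-filled). */
        if (madvise(ram_va, ram_size, MADV_POPULATE_WRITE))
                return -1;

        /* Only this region carries measured content ... */
        memcpy((char *)ram_va + payload_off, payload, payload_len);

        /* ... the rest of the range stays as "unmeasured" zeros. */
        return 0;
}

Whether guest_memfd will allow exactly this flow is the open question
above; the point is only that the populate-and-pin step belongs in the
VMM, not in the VFIO kernel code.
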
Jason