Message-ID: <20211119151943.GH876299@ziepe.ca>
Date: Fri, 19 Nov 2021 11:19:43 -0400
From: Jason Gunthorpe <jgg@...pe.ca>
To: Chao Peng <chao.p.peng@...ux.intel.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
qemu-devel@...gnu.org, Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
Hugh Dickins <hughd@...gle.com>,
Jeff Layton <jlayton@...nel.org>,
"J . Bruce Fields" <bfields@...ldses.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Yu Zhang <yu.c.zhang@...ux.intel.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
luto@...nel.org, john.ji@...el.com, susie.li@...el.com,
jun.nakajima@...el.com, dave.hansen@...el.com, ak@...ux.intel.com,
david@...hat.com
Subject: Re: [RFC v2 PATCH 01/13] mm/shmem: Introduce F_SEAL_GUEST
On Fri, Nov 19, 2021 at 09:47:27PM +0800, Chao Peng wrote:
> From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
>
> The new seal type provides semantics required for KVM guest private
> memory support. A file descriptor with the seal set is going to be used
> as source of guest memory in confidential computing environments such as
> Intel TDX and AMD SEV.
>
> F_SEAL_GUEST can only be set on an empty memfd. After the seal is set,
> userspace cannot read, write or mmap the memfd.
>
> Userspace is in charge of the guest memory lifecycle: it can allocate
> memory with fallocate() or punch a hole to free memory from the guest.
>
> The file descriptor is passed down to KVM as the guest memory backend.
> KVM registers itself as the owner of the memfd via
> memfd_register_guest().
>
> KVM provides callbacks that need to be called on fallocate and punch
> hole.
>
> memfd_register_guest() returns callbacks that need to be used for
> requesting a new page from the memfd.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> Signed-off-by: Chao Peng <chao.p.peng@...ux.intel.com>
> include/linux/memfd.h | 24 ++++++++
> include/linux/shmem_fs.h | 9 +++
> include/uapi/linux/fcntl.h | 1 +
> mm/memfd.c | 33 +++++++++-
> mm/shmem.c | 123 ++++++++++++++++++++++++++++++++++++-
> 5 files changed, 186 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/memfd.h b/include/linux/memfd.h
> index 4f1600413f91..ff920ef28688 100644
> --- a/include/linux/memfd.h
> +++ b/include/linux/memfd.h
> @@ -4,13 +4,37 @@
>
> #include <linux/file.h>
>
> +struct guest_ops {
> + void (*invalidate_page_range)(struct inode *inode, void *owner,
> + pgoff_t start, pgoff_t end);
> + void (*fallocate)(struct inode *inode, void *owner,
> + pgoff_t start, pgoff_t end);
> +};
> +
> +struct guest_mem_ops {
> + unsigned long (*get_lock_pfn)(struct inode *inode, pgoff_t offset,
> + bool alloc, int *order);
> + void (*put_unlock_pfn)(unsigned long pfn);
> +
> +};
Ignoring confidential compute for a moment: if qemu can put all the
guest memory in a memfd and not map it, then I'd also like to see that
the IOMMU can use this interface too so we can have VFIO working in
this configuration.
As designed, the above looks useful for importing a memfd into a VFIO
container, but could you consider some more generic naming than calling
this 'guest'?
Along the same lines, to support fast migration we'd want to be able
to send these things to the RDMA subsystem as well so we can do data
transfer. Very similar to VFIO.
Also, shouldn't this be two patches? F_SEAL is not really related to
these accessors, is it?
> +extern inline int memfd_register_guest(struct inode *inode, void *owner,
> + const struct guest_ops *guest_ops,
> + const struct guest_mem_ops **guest_mem_ops);
Why does this take an inode and not a file *?
> +int shmem_register_guest(struct inode *inode, void *owner,
> + const struct guest_ops *guest_ops,
> + const struct guest_mem_ops **guest_mem_ops)
> +{
> + struct shmem_inode_info *info = SHMEM_I(inode);
> +
> + if (!owner)
> + return -EINVAL;
> +
> + if (info->guest_owner && info->guest_owner != owner)
> + return -EPERM;
And this looks like it means only a single subsystem can use this API
at once, not so nice..
Jason