Message-ID: <CAGtprH9OJpj_iUbgjVSjCnqqpWt3XiMT6Xg5PtywEf9b-iF-1A@mail.gmail.com>
Date: Wed, 21 Jun 2023 02:01:16 -0700
From: Vishal Annapurve <vannapurve@...gle.com>
To: Mike Kravetz <mike.kravetz@...cle.com>
Cc: Ackerley Tng <ackerleytng@...gle.com>, akpm@...ux-foundation.org,
muchun.song@...ux.dev, pbonzini@...hat.com, seanjc@...gle.com,
shuah@...nel.org, willy@...radead.org, brauner@...nel.org,
chao.p.peng@...ux.intel.com, coltonlewis@...gle.com,
david@...hat.com, dhildenb@...hat.com, dmatlack@...gle.com,
erdemaktas@...gle.com, hughd@...gle.com, isaku.yamahata@...il.com,
jarkko@...nel.org, jmattson@...gle.com, joro@...tes.org,
jthoughton@...gle.com, jun.nakajima@...el.com,
kirill.shutemov@...ux.intel.com, liam.merwick@...cle.com,
mail@...iej.szmigiero.name, mhocko@...e.com, michael.roth@....com,
qperret@...gle.com, rientjes@...gle.com, rppt@...nel.org,
steven.price@....com, tabba@...gle.com, vbabka@...e.cz,
vipinsh@...gle.com, vkuznets@...hat.com, wei.w.wang@...el.com,
yu.c.zhang@...ux.intel.com, kvm@...r.kernel.org,
linux-api@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
linux-mm@...ck.org, qemu-devel@...gnu.org, x86@...nel.org
Subject: Re: [RFC PATCH 00/19] hugetlb support for KVM guest_mem
On Fri, Jun 16, 2023 at 11:28 AM Mike Kravetz <mike.kravetz@...cle.com> wrote:
>
> On 06/06/23 19:03, Ackerley Tng wrote:
> > Hello,
> >
> > This patchset builds upon a soon-to-be-published WIP patchset that Sean
> > published at https://github.com/sean-jc/linux/tree/x86/kvm_gmem_solo, mentioned
> > at [1].
> >
> > The tree can be found at:
> > https://github.com/googleprodkernel/linux-cc/tree/gmem-hugetlb-rfc-v1
> >
> > In this patchset, hugetlb support for KVM's guest_mem (aka gmem) is introduced,
> > allowing VM private memory (for confidential computing) to be backed by hugetlb
> > pages.
> >
> > guest_mem provides userspace with a handle, with which userspace can allocate
> > and deallocate memory for confidential VMs without mapping the memory into
> > userspace.
>
> Hello Ackerley,
>
> I am not sure if you are aware of, or have been following, the hugetlb HGM
> discussion in this thread:
> https://lore.kernel.org/linux-mm/20230306191944.GA15773@monkey/
>
> There we are trying to decide if HGM should be added to hugetlb, or if
> perhaps a new filesystem/driver/allocator should be created. The concern
> is added complexity to hugetlb as well as core mm special casing. Note
> that HGM is addressing issues faced by existing hugetlb users.
>
> Your proposal here suggests modifying hugetlb so that it can be used in
> a new way (use case) by KVM's guest_mem. As such it really seems like
> something that should be done in a separate filesystem/driver/allocator.
> You will likely not get much support for modifying hugetlb.
>
> --
> Mike Kravetz
>
IIUC, mm/hugetlb.c implements the memory manager for hugetlb pages and
fs/hugetlbfs/inode.c implements the filesystem logic for hugetlbfs.
This series implements a new filesystem with limited operations,
parallel to the hugetlbfs filesystem, but tries to reuse the hugetlb
memory manager. The effort here is not to add any new features to the
hugetlb memory manager, but to clean it up so that it can be used by a
new filesystem.
guest_mem warrants a new filesystem since it supports only limited
operations on the underlying files, but there is no additional
restriction on the underlying memory management. One could argue,
though, that memory management for guest_mem files can be very
simple, in line with the limited operations on the files.
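As a rough illustration (not taken from the RFC itself), the
userspace flow against such a file would look roughly like the sketch
below: allocation and deallocation happen purely through the fd via
fallocate(), with the memory never being mapped into userspace. The
KVM_CREATE_GUEST_MEMFD ioctl and struct kvm_create_guest_memfd names
follow the guest_memfd API as it later landed upstream and may differ
in the gmem-hugetlb RFC tree; the hugetlb flag is only assumed here.

#define _GNU_SOURCE
#include <stddef.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int gmem_example(int vm_fd, size_t size)
{
        struct kvm_create_guest_memfd args = {
                .size  = size,
                .flags = 0,     /* hypothetical: a hugetlb flag would go here */
        };
        int gmem_fd;

        /* Ask KVM for a handle to guest-private memory. */
        gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &args);
        if (gmem_fd < 0)
                return -1;

        /* Allocate backing memory for the first 2M of the file. */
        fallocate(gmem_fd, 0, 0, 2 * 1024 * 1024);

        /* Deallocate it again by punching a hole in the file. */
        fallocate(gmem_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  0, 2 * 1024 * 1024);

        return gmem_fd;
}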
If this series were to go the separate way of implementing a new
memory manager, one immediate requirement that might spring up would
be to convert memory from being hugetlb-managed to being managed by
this newly introduced memory manager, and vice versa, at runtime,
since there could be a mix of VMs on the same platform using
guest_mem and hugetlb.
Maybe this could be satisfied by having a separate global reservation
pool that is consumed by both, which, in my understanding, would need
more changes.
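For context on what such a shared pool would replace: today the
hugetlb pool is a single, globally sized pool per huge page size,
resized through procfs/sysfs, and both hugetlbfs and a guest_mem-style
consumer would have to draw from a pool like that. A minimal sketch
using the standard upstream path for the default huge page size (the
helper name is just illustrative):

#include <stdio.h>

/* Set the global hugetlb pool (default huge page size) to 'count'
 * persistent huge pages by writing /proc/sys/vm/nr_hugepages. */
static int reserve_hugepages(unsigned long count)
{
        FILE *f = fopen("/proc/sys/vm/nr_hugepages", "w");

        if (!f)
                return -1;
        fprintf(f, "%lu\n", count);
        return fclose(f);
}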
Using guest_mem for all VMs by default would be future work,
contingent on all existing use cases/requirements being satisfied.
Regards,
Vishal