Message-ID: <6827969540b5d_345b8829485@iweiny-mobl.notmuch>
Date: Fri, 16 May 2025 14:48:37 -0500
From: Ira Weiny <ira.weiny@...el.com>
To: Ackerley Tng <ackerleytng@...gle.com>, <kvm@...r.kernel.org>,
	<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>, <x86@...nel.org>,
	<linux-fsdevel@...r.kernel.org>, <afranji@...gle.com>
CC: <ackerleytng@...gle.com>, <aik@....com>, <ajones@...tanamicro.com>,
	<akpm@...ux-foundation.org>, <amoorthy@...gle.com>,
	<anthony.yznaga@...cle.com>, <anup@...infault.org>, <aou@...s.berkeley.edu>,
	<bfoster@...hat.com>, <binbin.wu@...ux.intel.com>, <brauner@...nel.org>,
	<catalin.marinas@....com>, <chao.p.peng@...el.com>, <chenhuacai@...nel.org>,
	<dave.hansen@...el.com>, <david@...hat.com>, <dmatlack@...gle.com>,
	<dwmw@...zon.co.uk>, <erdemaktas@...gle.com>, <fan.du@...el.com>,
	<fvdl@...gle.com>, <graf@...zon.com>, <haibo1.xu@...el.com>,
	<hch@...radead.org>, <hughd@...gle.com>, <ira.weiny@...el.com>,
	<isaku.yamahata@...el.com>, <jack@...e.cz>, <james.morse@....com>,
	<jarkko@...nel.org>, <jgg@...pe.ca>, <jgowans@...zon.com>,
	<jhubbard@...dia.com>, <jroedel@...e.de>, <jthoughton@...gle.com>,
	<jun.miao@...el.com>, <kai.huang@...el.com>, <keirf@...gle.com>,
	<kent.overstreet@...ux.dev>, <kirill.shutemov@...el.com>,
	<liam.merwick@...cle.com>, <maciej.wieczor-retman@...el.com>,
	<mail@...iej.szmigiero.name>, <maz@...nel.org>, <mic@...ikod.net>,
	<michael.roth@....com>, <mpe@...erman.id.au>, <muchun.song@...ux.dev>,
	<nikunj@....com>, <nsaenz@...zon.es>, <oliver.upton@...ux.dev>,
	<palmer@...belt.com>, <pankaj.gupta@....com>, <paul.walmsley@...ive.com>,
	<pbonzini@...hat.com>, <pdurrant@...zon.co.uk>, <peterx@...hat.com>,
	<pgonda@...gle.com>, <pvorel@...e.cz>, <qperret@...gle.com>,
	<quic_cvanscha@...cinc.com>, <quic_eberman@...cinc.com>,
	<quic_mnalajal@...cinc.com>, <quic_pderrin@...cinc.com>,
	<quic_pheragu@...cinc.com>, <quic_svaddagi@...cinc.com>,
	<quic_tsoni@...cinc.com>, <richard.weiyang@...il.com>,
	<rick.p.edgecombe@...el.com>, <rientjes@...gle.com>, <roypat@...zon.co.uk>,
	<rppt@...nel.org>, <seanjc@...gle.com>, <shuah@...nel.org>,
	<steven.price@....com>, <steven.sistare@...cle.com>,
	<suzuki.poulose@....com>, <tabba@...gle.com>, <thomas.lendacky@....com>,
	<usama.arif@...edance.com>, <vannapurve@...gle.com>, <vbabka@...e.cz>,
	<viro@...iv.linux.org.uk>, <vkuznets@...hat.com>, <wei.w.wang@...el.com>,
	<will@...nel.org>, <willy@...radead.org>, <xiaoyao.li@...el.com>,
	<yan.y.zhao@...el.com>, <yilun.xu@...el.com>, <yuzenghui@...wei.com>,
	<zhiquan1.li@...el.com>
Subject: Re: [RFC PATCH v2 00/51] 1G page support for guest_memfd

Ackerley Tng wrote:
> Hello,
> 
> This patchset builds upon discussion at LPC 2024 and many guest_memfd
> upstream calls to provide 1G page support for guest_memfd by taking
> pages from HugeTLB.
> 
> This patchset is based on Linux v6.15-rc6, and requires the mmap support
> for guest_memfd patchset (Thanks Fuad!) [1].

While trying to manage dependencies, I found that Ryan's just-released
series [1] is required to build this set.

[1] https://lore.kernel.org/all/cover.1747368092.git.afranji@google.com/

Specifically, this patch:
	https://lore.kernel.org/all/1f42c32fc18d973b8ec97c8be8b7cd921912d42a.1747368092.git.afranji@google.com/

	defines

	alloc_anon_secure_inode()

Am I wrong in that?

> 
> For ease of testing, this series is also available, stitched together,
> at https://github.com/googleprodkernel/linux-cc/tree/gmem-1g-page-support-rfc-v2
> 

I went digging in your git tree and then found Ryan's set.  So thanks for
the git tree.  :-D

However, it seems this adds another dependency, which should be managed
in David's email of dependencies?

Ira

> This patchset can be divided into two sections:
> 
> (a) Patches from the beginning up to and including "KVM: selftests:
>     Update script to map shared memory from guest_memfd" are a modified
>     version of "conversion support for guest_memfd", which Fuad is
>     managing [2].
> 
> (b) Patches after "KVM: selftests: Update script to map shared memory
>     from guest_memfd" until the end are the patches that actually bring
>     in 1G page support for guest_memfd.
> 
> These are the significant differences between (a) and [2]:
> 
> + [2] uses an xarray to track sharability, but I used a maple tree
>   because for 1G pages, iterating pagewise to update shareability was
>   prohibitively slow even for testing. I was choosing from among
>   multi-index xarrays, interval trees and maple trees [3], and picked
>   maple trees because
>     + Maple trees were easier to figure out since I didn't have to
>       compute the correct multi-index order and handle edge cases if the
>       converted range wasn't a neat power of 2.
>     + Maple trees were easier to figure out as compared to updating
>       parts of a multi-index xarray.
>     + Maple trees had an easier API to use than interval trees.
> + [2] doesn't yet have a conversion ioctl, but I needed it to test 1G
>   support end-to-end.
> + (a) removes guest_memfd folios from the LRU, which I needed to get
>   conversion selftests to work as expected, since LRU participation was
>   causing unexpected refcounts on folios, which were blocking
>   conversions.
> 
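
For concreteness, the range-based tracking described in the first bullet
above boils down to something like the sketch below (the names
gmem_set_shareability(), gmem_get_shareability() and SHAREABILITY_* are
invented for illustration and need not match the patches). The key point
is that a single mtree_store_range() call covers an entire converted
range, instead of one update per 4K index:

#include <linux/maple_tree.h>
#include <linux/xarray.h>       /* xa_mk_value() / xa_to_value() */

enum shareability {
        SHAREABILITY_GUEST = 1,         /* guest-only (private) */
        SHAREABILITY_ALL   = 2,         /* host may also fault this range */
};

/* Mark [first, last] with a single maple tree store, even for a 1G range. */
static int gmem_set_shareability(struct maple_tree *mt,
                                 pgoff_t first, pgoff_t last,
                                 enum shareability s)
{
        return mtree_store_range(mt, first, last, xa_mk_value(s),
                                 GFP_KERNEL);
}

/* Look up a single index; treat untracked indices as guest-only. */
static enum shareability gmem_get_shareability(struct maple_tree *mt,
                                               pgoff_t index)
{
        void *entry = mtree_load(mt, index);

        return entry ? xa_to_value(entry) : SHAREABILITY_GUEST;
}

The real series presumably also needs locking around these updates and an
initial all-private or all-shared default set at guest_memfd creation time.
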
> I am sending (a) in emails as well, as opposed to just leaving it on
> GitHub, so that we can discuss by commenting inline on emails. If you'd
> like to just look at 1G page support, here are some key takeaways from
> the first section (a):
> 
> + If GUEST_MEMFD_FLAG_SUPPORT_SHARED is requested during guest_memfd
>   creation, guest_memfd will
>     + Track shareability (whether an index in the inode is guest-only or
>       whether the host is allowed to fault memory at that index).
>     + Always be used for guest faults - specifically, kvm_gmem_get_pfn()
>       will be used to provide pages for the guest.
>     + Always be used by KVM to check private/shared status of a gfn.
> + guest_memfd now has conversion ioctls, allowing conversion to
>   private/shared
>     + Conversion can fail if there are unexpected refcounts on any
>       folios in the range.
> 
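
On the last point above (conversion failing on unexpected refcounts), the
check is conceptually something like the sketch below. The helper name,
the error code (e.g. -EAGAIN) and the expected-count computation are
illustrative only; the real expected count depends on how the folio was
inserted into the filemap and whether it has been split:

#include <linux/pagemap.h>
#include <linux/pagevec.h>

/*
 * Illustrative only: fail a conversion if any folio backing
 * [start, end] has references beyond the ones we can account for.
 */
static int gmem_check_for_unexpected_refs(struct address_space *mapping,
                                          pgoff_t start, pgoff_t end)
{
        struct folio_batch fbatch;
        pgoff_t index = start;
        int ret = 0;

        folio_batch_init(&fbatch);
        while (!ret && filemap_get_folios(mapping, &index, end, &fbatch)) {
                unsigned int i;

                for (i = 0; i < folio_batch_count(&fbatch); i++) {
                        struct folio *folio = fbatch.folios[i];
                        /*
                         * Assumed expected count: the filemap's
                         * reference(s) plus the reference that
                         * filemap_get_folios() just took.
                         */
                        long expected = folio_nr_pages(folio) + 1;

                        if (folio_ref_count(folio) != expected) {
                                ret = -EAGAIN;
                                break;
                        }
                }
                folio_batch_release(&fbatch);
        }
        return ret;
}
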
> Focusing on (b) 1G page support, here's an overview:
> 
> 1. A bunch of refactoring patches for HugeTLB that isolate the
>    allocation of a HugeTLB folio from other HugeTLB concepts, such as
>    VMA-level reservations, and from HugeTLBfs-specific concepts, such as
>    where memory policy is stored in the VMA, or where the subpool is
>    stored on the inode.
> 2. A few patches that add a guestmem_hugetlb allocator within mm/. The
>    guestmem_hugetlb allocator is a wrapper around HugeTLB to modularize
>    the memory management functions, and to cleanly handle cleanup, so
>    that folio cleanup can happen after the guest_memfd inode (and even
>    KVM) goes away.
> 3. Some updates to guest_memfd to use the guestmem_hugetlb allocator.
> 4. Selftests for 1G page support.
> 
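
Item 2 above suggests an interface along these lines; the ops table below
is only a guess at the general shape, with invented names, not the actual
guestmem_hugetlb API:

/*
 * Hypothetical shape of a pluggable guest_memfd backing allocator.
 * All names are invented for illustration.
 */
struct guestmem_allocator_ops {
        /* Set up per-inode state (e.g. a subpool sized to the inode). */
        void *(*inode_setup)(size_t size, u64 flags);

        /* Allocate the backing folio covering @index (e.g. 1G HugeTLB). */
        struct folio *(*alloc_folio)(void *priv, pgoff_t index);

        /*
         * Return a folio to the allocator.  Because the allocator, not
         * the inode, owns cleanup, a folio can be freed after the
         * guest_memfd inode (and even KVM) has gone away.
         */
        void (*free_folio)(void *priv, struct folio *folio);

        /* Tear down the per-inode state created by inode_setup(). */
        void (*inode_teardown)(void *priv, size_t size);
};
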
> Here are some remaining issues/TODOs:
> 
> 1. Memory error handling, such as for machine check errors, has not been
>    implemented.
> 2. I've not looked into preparedness of pages; only zeroing has been
>    considered.
> 3. When allocating HugeTLB pages, if two threads allocate indices
>    mapping to the same huge page, the utilization in guest_memfd inode's
>    subpool may momentarily go over the subpool limit (the requested size
>    of the inode at guest_memfd creation time), causing one of the two
>    threads to get -ENOMEM. Suggestions to solve this are appreciated!
> 4. max_usage_in_bytes statistic (cgroups v1) for guest_memfd HugeTLB
>    pages should be correct but needs testing and could be wrong.
> 5. memcg charging (charge_memcg()) for cgroups v2 for guest_memfd
>    HugeTLB pages after splitting should be correct but needs testing and
>    could be wrong.
> 6. Page cache accounting: when a HugeTLB page is split, guest_memfd
>    memory will be counted in both the NR_HUGETLB stat (counted at
>    HugeTLB allocation time) and the NR_FILE_PAGES stat (counted when the
>    split pages are added to the filemap). Is this aligned with what
>    people expect?
> 
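
To make TODO 3 above concrete, the momentary over-commit happens roughly
as in this simplified sketch (not the hugetlb subpool code itself): two
faulting threads hit 4K indices backed by the same not-yet-allocated 1G
page, and both try to charge the inode's subpool before either allocation
is installed:

struct gmem_subpool {
        spinlock_t lock;
        long max_pages;         /* inode size at creation, in huge pages */
        long used_pages;
};

/* Simplified: charge one huge page against the subpool. */
static int gmem_subpool_get_page(struct gmem_subpool *spool)
{
        int ret = 0;

        spin_lock(&spool->lock);
        if (spool->used_pages + 1 > spool->max_pages)
                ret = -ENOMEM;  /* the second racing thread lands here */
        else
                spool->used_pages++;
        spin_unlock(&spool->lock);
        return ret;
}

Both threads charge before noticing they map to the same huge page, so the
second charge can push used_pages over max_pages and fail with -ENOMEM,
even though only one huge page will actually be allocated and will end up
backing both indices.
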
> Here are some optimizations that could be explored in future series:
> 
> 1. Pages could be split from 1G to 2M first and only split to 4K if
>    necessary.
> 2. Zeroing could be skipped for CoCo VMs if hardware already zeroes the
>    pages.
> 
> Here's RFC v1 [4] if you're interested in the motivation behind choosing
> HugeTLB, or the history of this patch series.
> 
> [1] https://lore.kernel.org/all/20250513163438.3942405-11-tabba@google.com/T/
> [2] https://lore.kernel.org/all/20250328153133.3504118-1-tabba@google.com/T/
> [3] https://lore.kernel.org/all/diqzzfih8q7r.fsf@ackerleytng-ctop.c.googlers.com/
> [4] https://lore.kernel.org/all/cover.1726009989.git.ackerleytng@google.com/T/
> 
