Message-ID: <Z0ykBZAOZUdf8GbB@x1n>
Date: Sun, 1 Dec 2024 12:59:33 -0500
From: Peter Xu <peterx@...hat.com>
To: Ackerley Tng <ackerleytng@...gle.com>
Cc: tabba@...gle.com, quic_eberman@...cinc.com, roypat@...zon.co.uk,
jgg@...dia.com, david@...hat.com, rientjes@...gle.com,
fvdl@...gle.com, jthoughton@...gle.com, seanjc@...gle.com,
pbonzini@...hat.com, zhiquan1.li@...el.com, fan.du@...el.com,
jun.miao@...el.com, isaku.yamahata@...el.com, muchun.song@...ux.dev,
mike.kravetz@...cle.com, erdemaktas@...gle.com,
vannapurve@...gle.com, qperret@...gle.com, jhubbard@...dia.com,
willy@...radead.org, shuah@...nel.org, brauner@...nel.org,
bfoster@...hat.com, kent.overstreet@...ux.dev, pvorel@...e.cz,
rppt@...nel.org, richard.weiyang@...il.com, anup@...infault.org,
haibo1.xu@...el.com, ajones@...tanamicro.com, vkuznets@...hat.com,
maciej.wieczor-retman@...el.com, pgonda@...gle.com,
oliver.upton@...ux.dev, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, kvm@...r.kernel.org,
linux-kselftest@...r.kernel.org, linux-fsdevel@...ck.org
Subject: Re: [RFC PATCH 14/39] KVM: guest_memfd: hugetlb: initialization and
cleanup
On Tue, Sep 10, 2024 at 11:43:45PM +0000, Ackerley Tng wrote:
> +/**
> + * Removes folios in range [@lstart, @lend) from page cache of inode, updates
> + * inode metadata and hugetlb reservations.
> + */
> +static void kvm_gmem_hugetlb_truncate_folios_range(struct inode *inode,
> +						    loff_t lstart, loff_t lend)
> +{
> +	struct kvm_gmem_hugetlb *hgmem;
> +	struct hstate *h;
> +	int gbl_reserve;
> +	int num_freed;
> +
> +	hgmem = kvm_gmem_hgmem(inode);
> +	h = hgmem->h;
> +
> +	num_freed = kvm_gmem_hugetlb_filemap_remove_folios(inode->i_mapping,
> +							    h, lstart, lend);
> +
> +	gbl_reserve = hugepage_subpool_put_pages(hgmem->spool, num_freed);
> +	hugetlb_acct_memory(h, -gbl_reserve);
I wonder whether this is needed, and whether hugetlb_acct_memory() needs to
be exported in the other patch.

IIUC the subpool manages the global reservation on its own when min_pages is
set (which should be gmem's case, where both max and min are set to the gmem
size).  That happens in hugepage_put_subpool() -> unlock_or_release_subpool().
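
For reference, this is roughly how I read the min_pages accounting on the
put side in mm/hugetlb.c.  It's a paraphrased sketch from memory, so field
names and details may not be exact:

/* Paraphrased from mm/hugetlb.c, not the exact code. */
static long hugepage_subpool_put_pages(struct hugepage_subpool *spool,
				       long delta)
{
	long ret = delta;

	spin_lock_irq(&spool->lock);

	if (spool->max_hpages != -1)		/* max size accounting */
		spool->used_hpages -= delta;

	/* min size accounting: refill the subpool's reserve first */
	if (spool->min_hpages != -1 && spool->used_hpages < spool->min_hpages) {
		if (spool->rsv_hpages + delta <= spool->min_hpages)
			ret = 0;
		else
			ret = spool->rsv_hpages + delta - spool->min_hpages;

		spool->rsv_hpages += delta;
		if (spool->rsv_hpages > spool->min_hpages)
			spool->rsv_hpages = spool->min_hpages;
	}

	/*
	 * Drops the lock.  If this was the last reference and no pages are
	 * in use, it also does hugetlb_acct_memory(h, -spool->min_hpages)
	 * and frees the subpool.
	 */
	unlock_or_release_subpool(spool);

	/* Caller is expected to pass this to hugetlb_acct_memory(h, -ret). */
	return ret;
}

If that reading is right, then with max_hpages == min_hpages == gmem size the
folios freed here simply refill the subpool's reserve, gbl_reserve ends up
being 0, and the global reservation is only dropped when the subpool itself
is released.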
> +
> +	spin_lock(&inode->i_lock);
> +	inode->i_blocks -= blocks_per_huge_page(h) * num_freed;
> +	spin_unlock(&inode->i_lock);
> +}
--
Peter Xu