Message-ID: <Z8ncWAP7ln1St5W-@x1.local>
Date: Thu, 6 Mar 2025 12:33:12 -0500
From: Peter Xu <peterx@...hat.com>
To: Ackerley Tng <ackerleytng@...gle.com>
Cc: tabba@...gle.com, quic_eberman@...cinc.com, roypat@...zon.co.uk,
jgg@...dia.com, david@...hat.com, rientjes@...gle.com,
fvdl@...gle.com, jthoughton@...gle.com, seanjc@...gle.com,
pbonzini@...hat.com, zhiquan1.li@...el.com, fan.du@...el.com,
jun.miao@...el.com, isaku.yamahata@...el.com, muchun.song@...ux.dev,
mike.kravetz@...cle.com, erdemaktas@...gle.com,
vannapurve@...gle.com, qperret@...gle.com, jhubbard@...dia.com,
willy@...radead.org, shuah@...nel.org, brauner@...nel.org,
bfoster@...hat.com, kent.overstreet@...ux.dev, pvorel@...e.cz,
rppt@...nel.org, richard.weiyang@...il.com, anup@...infault.org,
haibo1.xu@...el.com, ajones@...tanamicro.com, vkuznets@...hat.com,
maciej.wieczor-retman@...el.com, pgonda@...gle.com,
oliver.upton@...ux.dev, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, kvm@...r.kernel.org,
linux-kselftest@...r.kernel.org, linux-fsdevel@...ck.org
Subject: Re: [RFC PATCH 14/39] KVM: guest_memfd: hugetlb: initialization and
cleanup
On Tue, Sep 10, 2024 at 11:43:45PM +0000, Ackerley Tng wrote:
> +static int kvm_gmem_hugetlb_filemap_remove_folios(struct address_space *mapping,
> +                                                   struct hstate *h,
> +                                                   loff_t lstart, loff_t lend)
> +{
> +        const pgoff_t end = lend >> PAGE_SHIFT;
> +        pgoff_t next = lstart >> PAGE_SHIFT;
> +        struct folio_batch fbatch;
> +        int num_freed = 0;
> +
> +        folio_batch_init(&fbatch);
> +        while (filemap_get_folios(mapping, &next, end - 1, &fbatch)) {
> +                int i;
> +                for (i = 0; i < folio_batch_count(&fbatch); ++i) {
> +                        struct folio *folio;
> +                        pgoff_t hindex;
> +                        u32 hash;
> +
> +                        folio = fbatch.folios[i];
> +                        hindex = folio->index >> huge_page_order(h);
> +                        hash = hugetlb_fault_mutex_hash(mapping, hindex);
> +
> +                        mutex_lock(&hugetlb_fault_mutex_table[hash]);

I'm debugging some issue and this caught my attention. IIUC we need to
unmap one last time here while holding the fault mutex, right? Something
like:

        unmap_mapping_range(mapping, lstart, lend - lstart, 0);

Otherwise I don't see what protects against a concurrent fault racing with
the removal of the folio from the page cache. See
remove_inode_single_folio() in hugetlbfs for reference. For generic folios,
unmapping normally requires the folio lock, IIUC, but here the fault mutex
should be fine.

So far, even with that line added, my issue hasn't gone away. However, I
figured I should raise this here anyway, at least as a question.
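
To make the placement concrete, here is a rough, untested sketch of what I
mean: the unmap_mapping_range() call suggested above, done per folio while
the fault mutex is held. The folio_mapped() check and the offset math are
only my guess at how it would look, mirroring remove_inode_single_folio():

        /*
         * Sketch only, not a tested patch: with the fault mutex held,
         * unmap the folio again in case it was faulted back in after any
         * earlier unmap, so that a concurrent fault cannot race with the
         * filemap removal that follows.
         */
        mutex_lock(&hugetlb_fault_mutex_table[hash]);
        if (unlikely(folio_mapped(folio)))
                unmap_mapping_range(mapping,
                                    (loff_t)folio->index << PAGE_SHIFT,
                                    huge_page_size(h), 0);
        kvm_gmem_hugetlb_filemap_remove_folio(folio);
        mutex_unlock(&hugetlb_fault_mutex_table[hash]);

The names and offsets above are only illustrative; the point is just to
show where the unmap-under-mutex would sit relative to the removal.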

> +                        kvm_gmem_hugetlb_filemap_remove_folio(folio);
> +                        mutex_unlock(&hugetlb_fault_mutex_table[hash]);
> +
> +                        num_freed++;
> +                }
> +                folio_batch_release(&fbatch);
> +                cond_resched();
> +        }
> +
> +        return num_freed;
> +}
--
Peter Xu