Message-ID: <aDVBxa2IY8V7dluq@yzhao56-desk.sh.intel.com>
Date: Tue, 27 May 2025 12:38:29 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Ackerley Tng <ackerleytng@...gle.com>, <kvm@...r.kernel.org>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>, <x86@...nel.org>,
<linux-fsdevel@...r.kernel.org>, <aik@....com>, <ajones@...tanamicro.com>,
<akpm@...ux-foundation.org>, <amoorthy@...gle.com>,
<anthony.yznaga@...cle.com>, <anup@...infault.org>, <aou@...s.berkeley.edu>,
<bfoster@...hat.com>, <binbin.wu@...ux.intel.com>, <brauner@...nel.org>,
<catalin.marinas@....com>, <chao.p.peng@...el.com>, <chenhuacai@...nel.org>,
<dave.hansen@...el.com>, <david@...hat.com>, <dmatlack@...gle.com>,
<dwmw@...zon.co.uk>, <erdemaktas@...gle.com>, <fan.du@...el.com>,
<fvdl@...gle.com>, <graf@...zon.com>, <haibo1.xu@...el.com>,
<hch@...radead.org>, <hughd@...gle.com>, <ira.weiny@...el.com>,
<isaku.yamahata@...el.com>, <jack@...e.cz>, <james.morse@....com>,
<jarkko@...nel.org>, <jgg@...pe.ca>, <jgowans@...zon.com>,
<jhubbard@...dia.com>, <jroedel@...e.de>, <jthoughton@...gle.com>,
<jun.miao@...el.com>, <kai.huang@...el.com>, <keirf@...gle.com>,
<kent.overstreet@...ux.dev>, <kirill.shutemov@...el.com>,
<liam.merwick@...cle.com>, <maciej.wieczor-retman@...el.com>,
<mail@...iej.szmigiero.name>, <maz@...nel.org>, <mic@...ikod.net>,
<michael.roth@....com>, <mpe@...erman.id.au>, <muchun.song@...ux.dev>,
<nikunj@....com>, <nsaenz@...zon.es>, <oliver.upton@...ux.dev>,
<palmer@...belt.com>, <pankaj.gupta@....com>, <paul.walmsley@...ive.com>,
<pbonzini@...hat.com>, <pdurrant@...zon.co.uk>, <peterx@...hat.com>,
<pgonda@...gle.com>, <pvorel@...e.cz>, <qperret@...gle.com>,
<quic_cvanscha@...cinc.com>, <quic_eberman@...cinc.com>,
<quic_mnalajal@...cinc.com>, <quic_pderrin@...cinc.com>,
<quic_pheragu@...cinc.com>, <quic_svaddagi@...cinc.com>,
<quic_tsoni@...cinc.com>, <richard.weiyang@...il.com>,
<rick.p.edgecombe@...el.com>, <rientjes@...gle.com>, <roypat@...zon.co.uk>,
<rppt@...nel.org>, <seanjc@...gle.com>, <shuah@...nel.org>,
<steven.price@....com>, <steven.sistare@...cle.com>,
<suzuki.poulose@....com>, <tabba@...gle.com>, <thomas.lendacky@....com>,
<usama.arif@...edance.com>, <vannapurve@...gle.com>, <vbabka@...e.cz>,
<viro@...iv.linux.org.uk>, <vkuznets@...hat.com>, <wei.w.wang@...el.com>,
<will@...nel.org>, <willy@...radead.org>, <xiaoyao.li@...el.com>,
<yilun.xu@...el.com>, <yuzenghui@...wei.com>, <zhiquan1.li@...el.com>
Subject: Re: [RFC PATCH v2 38/51] KVM: guest_memfd: Split allocator pages for
guest_memfd use
> > +static int kvm_gmem_restructure_folios_in_range(struct inode *inode,
> > + pgoff_t start, size_t nr_pages,
> > + bool is_split_operation)
> > +{
> > + size_t to_nr_pages;
> > + pgoff_t index;
> > + pgoff_t end;
> > + void *priv;
> > + int ret;
> > +
> > + if (!kvm_gmem_has_custom_allocator(inode))
> > + return 0;
> > +
> > + end = start + nr_pages;
> > +
> > + /* Round to allocator page size, to check all (huge) pages in range. */
> > + priv = kvm_gmem_allocator_private(inode);
> > + to_nr_pages = kvm_gmem_allocator_ops(inode)->nr_pages_in_folio(priv);
> > +
> > + start = round_down(start, to_nr_pages);
> > + end = round_up(end, to_nr_pages);
> > +
> > + for (index = start; index < end; index += to_nr_pages) {
> > + struct folio *f;
> > +
> > + f = filemap_get_folio(inode->i_mapping, index);
> > + if (IS_ERR(f))
> > + continue;
> > +
> > + /* Leave just filemap's refcounts on the folio. */
> > + folio_put(f);
> > +
> > + if (is_split_operation)
> > + ret = kvm_gmem_split_folio_in_filemap(inode, f);
> The split operation is performed after kvm_gmem_unmap_private() within
> kvm_gmem_convert_should_proceed(), right?
>
> So, it seems that it's not necessary for TDX to avoid holding private page
> references, as TDX must have released the page refs after
> kvm_gmem_unmap_private() (except when there's a TDX module or KVM bug).
Oops. Please ignore this one.
The unmap does not necessarily cover the entire folio range, so the split
still requires that TDX not hold page refcounts.
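
To make the constraint concrete, here is a minimal sketch (the helper name
gmem_try_split_if_unreferenced() and the expected_refs accounting are
hypothetical, not from this series): since the unmap can leave TDX holding
references on pages in the not-yet-unmapped part of the folio, a split has
to verify that the folio's refcount is fully accounted for before it can
proceed.

/*
 * Sketch only, not code from this series: the helper name and the
 * expected_refs accounting are made up for illustration. The point is
 * that a split must bail out while any extra page reference (e.g. one
 * still held by TDX) remains on the huge folio.
 */
#include <linux/mm.h>
#include <linux/errno.h>

static int gmem_try_split_if_unreferenced(struct folio *folio, long expected_refs)
{
	/* expected_refs: references the filemap itself is known to hold. */
	if (folio_ref_count(folio) != expected_refs)
		return -EAGAIN;	/* e.g. TDX still pins part of the folio */

	/* ...safe to go ahead and split the huge folio here... */
	return 0;
}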