Message-ID: <20220713074458.GB2831541@chaop.bj.intel.com>
Date: Wed, 13 Jul 2022 15:44:58 +0800
From: Chao Peng <chao.p.peng@...ux.intel.com>
To: "Gupta, Pankaj" <pankaj.gupta@....com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
linux-api@...r.kernel.org, linux-doc@...r.kernel.org,
qemu-devel@...gnu.org, linux-kselftest@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
Hugh Dickins <hughd@...gle.com>,
Jeff Layton <jlayton@...nel.org>,
"J . Bruce Fields" <bfields@...ldses.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Shuah Khan <shuah@...nel.org>, Mike Rapoport <rppt@...nel.org>,
Steven Price <steven.price@....com>,
"Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
Vlastimil Babka <vbabka@...e.cz>,
Vishal Annapurve <vannapurve@...gle.com>,
Yu Zhang <yu.c.zhang@...ux.intel.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
luto@...nel.org, jun.nakajima@...el.com, dave.hansen@...el.com,
ak@...ux.intel.com, david@...hat.com, aarcange@...hat.com,
ddutile@...hat.com, dhildenb@...hat.com,
Quentin Perret <qperret@...gle.com>,
Michael Roth <michael.roth@....com>, mhocko@...e.com,
Muchun Song <songmuchun@...edance.com>
Subject: Re: [PATCH v7 04/14] mm/shmem: Support memfile_notifier
On Tue, Jul 12, 2022 at 08:02:34PM +0200, Gupta, Pankaj wrote:
> On 7/6/2022 10:20 AM, Chao Peng wrote:
> > From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> >
> > Implement shmem as a memfile_notifier backing store. Essentially it
> > interacts with the memfile_notifier feature flags for userspace
> > access/page migration/page reclaiming and implements the necessary
> > memfile_backing_store callbacks.
> >
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> > Signed-off-by: Chao Peng <chao.p.peng@...ux.intel.com>
> > ---
> > include/linux/shmem_fs.h | 2 +
> > mm/shmem.c | 109 ++++++++++++++++++++++++++++++++++++++-
> > 2 files changed, 110 insertions(+), 1 deletion(-)
...
> > +#ifdef CONFIG_MIGRATION
> > +static int shmem_migrate_page(struct address_space *mapping,
> > + struct page *newpage, struct page *page,
> > + enum migrate_mode mode)
> > +{
> > + struct inode *inode = mapping->host;
> > + struct shmem_inode_info *info = SHMEM_I(inode);
> > +
> > + if (info->memfile_node.flags & MEMFILE_F_UNMOVABLE)
> > + return -EOPNOTSUPP;
> > + return migrate_page(mapping, newpage, page, mode);
>
> Wondering how well page migrate would work for private pages
> on shmem memfd based backend?

From a high level:

- KVM unsets the MEMFILE_F_UNMOVABLE bit to indicate it is capable of
  migrating a page.
- Introduce new 'migrate' callback(s) to memfile_notifier_ops for KVM
  to register.
- The callback is hooked into migrate_page() here.
- Once page migration is requested, shmem calls into the 'migrate'
  callback(s) to perform the additional steps needed for encrypted
  memory (for TDX we will call TDH.MEM.PAGE.RELOCATE). A rough sketch
  follows below.
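
A rough, untested sketch of what that hook-up could look like; note the
->migrate callback and the memfile_notifier_migrate() helper are
illustrative names only, not something defined in this series:

	/*
	 * Hypothetical extension: a 'migrate' callback that a consumer
	 * such as KVM could register, invoked before the generic move.
	 */
	struct memfile_notifier_ops {
		/* existing callbacks elided ... */
		int (*migrate)(struct memfile_notifier *notifier,
			       pgoff_t offset, struct page *newpage,
			       struct page *page);
	};

	#ifdef CONFIG_MIGRATION
	static int shmem_migrate_page(struct address_space *mapping,
				      struct page *newpage, struct page *page,
				      enum migrate_mode mode)
	{
		struct inode *inode = mapping->host;
		struct shmem_inode_info *info = SHMEM_I(inode);
		int ret;

		if (info->memfile_node.flags & MEMFILE_F_UNMOVABLE)
			return -EOPNOTSUPP;

		/*
		 * Assumed helper: walk the notifiers attached to this
		 * memfile_node and let each one (e.g. KVM, which for TDX
		 * may issue TDH.MEM.PAGE.RELOCATE) relocate its view of
		 * the page before the generic page copy happens.
		 */
		ret = memfile_notifier_migrate(&info->memfile_node,
					       page->index, newpage, page);
		if (ret)
			return ret;

		return migrate_page(mapping, newpage, page, mode);
	}
	#endif
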
Chao
>
> > +}
> > +#endif
> > +
> > const struct address_space_operations shmem_aops = {
> > .writepage = shmem_writepage,
> > .dirty_folio = noop_dirty_folio,
> > @@ -3814,7 +3872,7 @@ const struct address_space_operations shmem_aops = {
> > .write_end = shmem_write_end,
> > #endif
> > #ifdef CONFIG_MIGRATION
> > - .migratepage = migrate_page,
> > + .migratepage = shmem_migrate_page,
> > #endif
> > .error_remove_page = shmem_error_remove_page,
> > };
> > @@ -3931,6 +3989,51 @@ static struct file_system_type shmem_fs_type = {
> > .fs_flags = FS_USERNS_MOUNT,
> > };