Message-ID: <20220810092532.GD862421@chaop.bj.intel.com>
Date:   Wed, 10 Aug 2022 17:25:32 +0800
From:   Chao Peng <chao.p.peng@...ux.intel.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
        linux-api@...r.kernel.org, linux-doc@...r.kernel.org,
        qemu-devel@...gnu.org, linux-kselftest@...r.kernel.org,
        Paolo Bonzini <pbonzini@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        Sean Christopherson <seanjc@...gle.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
        Hugh Dickins <hughd@...gle.com>,
        Jeff Layton <jlayton@...nel.org>,
        "J . Bruce Fields" <bfields@...ldses.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Shuah Khan <shuah@...nel.org>, Mike Rapoport <rppt@...nel.org>,
        Steven Price <steven.price@....com>,
        "Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
        Vlastimil Babka <vbabka@...e.cz>,
        Vishal Annapurve <vannapurve@...gle.com>,
        Yu Zhang <yu.c.zhang@...ux.intel.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        luto@...nel.org, jun.nakajima@...el.com, dave.hansen@...el.com,
        ak@...ux.intel.com, aarcange@...hat.com, ddutile@...hat.com,
        dhildenb@...hat.com, Quentin Perret <qperret@...gle.com>,
        Michael Roth <michael.roth@....com>, mhocko@...e.com,
        Muchun Song <songmuchun@...edance.com>
Subject: Re: [PATCH v7 04/14] mm/shmem: Support memfile_notifier

On Fri, Aug 05, 2022 at 03:26:02PM +0200, David Hildenbrand wrote:
> On 06.07.22 10:20, Chao Peng wrote:
> > From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> > 
> > Implement shmem as a memfile_notifier backing store. Essentially it
> > interacts with the memfile_notifier feature flags for userspace
> > access/page migration/page reclaiming and implements the necessary
> > memfile_backing_store callbacks.
> > 
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> > Signed-off-by: Chao Peng <chao.p.peng@...ux.intel.com>
> > ---
> 
> [...]
> 
> > +#ifdef CONFIG_MEMFILE_NOTIFIER
> > +static struct memfile_node *shmem_lookup_memfile_node(struct file *file)
> > +{
> > +	struct inode *inode = file_inode(file);
> > +
> > +	if (!shmem_mapping(inode->i_mapping))
> > +		return NULL;
> > +
> > +	return  &SHMEM_I(inode)->memfile_node;
> > +}
> > +
> > +
> > +static int shmem_get_pfn(struct file *file, pgoff_t offset, pfn_t *pfn,
> > +			 int *order)
> > +{
> > +	struct page *page;
> > +	int ret;
> > +
> > +	ret = shmem_getpage(file_inode(file), offset, &page, SGP_WRITE);
> > +	if (ret)
> > +		return ret;
> > +
> > +	unlock_page(page);
> > +	*pfn = page_to_pfn_t(page);
> > +	*order = thp_order(compound_head(page));
> > +	return 0;
> > +}
> > +
> > +static void shmem_put_pfn(pfn_t pfn)
> > +{
> > +	struct page *page = pfn_t_to_page(pfn);
> > +
> > +	if (!page)
> > +		return;
> > +
> > +	put_page(page);
> 
> Why do we export shmem_get_pfn/shmem_put_pfn and not simply
> 
> get_folio()
> 
> and let the caller deal with putting the folio? What's the reason to
> 
> a) Operate on PFNs and not folios
> b) Have these get/put semantics?

We have a design assumption that someday this can even support non-page-based
backing stores. There was some discussion on this:
  https://lkml.org/lkml/2022/3/28/1440
I should add documentation for these two callbacks.
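
For context, a rough, untested sketch of the folio-based alternative David is
asking about (hypothetical name shmem_memfile_get_folio; not part of the posted
series). Instead of handing back a pfn_t plus order and pairing it with
put_pfn(), the callback would return a referenced folio and let the caller drop
the reference itself:

	/*
	 * Sketch only: return a referenced folio for the page cache slot at
	 * @offset.  The reference taken by shmem_getpage() travels with the
	 * folio; the caller drops it with folio_put() when done.
	 */
	static struct folio *shmem_memfile_get_folio(struct file *file,
						     pgoff_t offset)
	{
		struct page *page;
		int ret;

		ret = shmem_getpage(file_inode(file), offset, &page, SGP_WRITE);
		if (ret)
			return ERR_PTR(ret);

		unlock_page(page);
		return page_folio(page);
	}

With that shape the get/put pairing disappears from the backing-store
interface: the consumer simply calls folio_put() when it is done, and can still
derive the pfn and mapping order from the folio if it needs them.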

> 
> > +}
> > +
> > +static struct memfile_backing_store shmem_backing_store = {
> > +	.lookup_memfile_node = shmem_lookup_memfile_node,
> > +	.get_pfn = shmem_get_pfn,
> > +	.put_pfn = shmem_put_pfn,
> > +};
> > +#endif /* CONFIG_MEMFILE_NOTIFIER */
> > +
> >  void __init shmem_init(void)
> >  {
> >  	int error;
> > @@ -3956,6 +4059,10 @@ void __init shmem_init(void)
> >  	else
> >  		shmem_huge = SHMEM_HUGE_NEVER; /* just in case it was patched */
> >  #endif
> > +
> > +#ifdef CONFIG_MEMFILE_NOTIFIER
> > +	memfile_register_backing_store(&shmem_backing_store);
> 
> Can we instead provide a dummy function that does nothing without
> CONFIG_MEMFILE_NOTIFIER?

Sounds good.

Chao
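
For reference, the stub being suggested would follow the usual kernel pattern
of a no-op static inline in the header (sketch only; assuming the declaration
lives in <linux/memfile_notifier.h> as in this series, and a void return to
match the unchecked call above):

	#ifdef CONFIG_MEMFILE_NOTIFIER
	void memfile_register_backing_store(struct memfile_backing_store *bs);
	#else
	/* Feature compiled out: registration becomes a no-op. */
	static inline void memfile_register_backing_store(struct memfile_backing_store *bs)
	{
	}
	#endif

shmem_init() could then call memfile_register_backing_store() unconditionally
and drop the #ifdef/#endif around the call site.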
> 
> > +#endif
> >  	return;
> >  
> >  out1:
> 
> 
> -- 
> Thanks,
> 
> David / dhildenb
> 
