Message-ID: <deba096c85e41c3a15d122f2159986a74b16770f.camel@intel.com>
Date: Mon, 19 Dec 2022 08:48:10 +0000
From: "Huang, Kai" <kai.huang@...el.com>
To: "chao.p.peng@...ux.intel.com" <chao.p.peng@...ux.intel.com>
CC: "tglx@...utronix.de" <tglx@...utronix.de>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Wang, Wei W" <wei.w.wang@...el.com>,
"jmattson@...gle.com" <jmattson@...gle.com>,
"Lutomirski, Andy" <luto@...nel.org>,
"ak@...ux.intel.com" <ak@...ux.intel.com>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
"david@...hat.com" <david@...hat.com>,
"qemu-devel@...gnu.org" <qemu-devel@...gnu.org>,
"tabba@...gle.com" <tabba@...gle.com>,
"Hocko, Michal" <mhocko@...e.com>,
"michael.roth@....com" <michael.roth@....com>,
"corbet@....net" <corbet@....net>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"dhildenb@...hat.com" <dhildenb@...hat.com>,
"bfields@...ldses.org" <bfields@...ldses.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>, "bp@...en8.de" <bp@...en8.de>,
"vannapurve@...gle.com" <vannapurve@...gle.com>,
"rppt@...nel.org" <rppt@...nel.org>,
"shuah@...nel.org" <shuah@...nel.org>,
"vkuznets@...hat.com" <vkuznets@...hat.com>,
"vbabka@...e.cz" <vbabka@...e.cz>,
"mail@...iej.szmigiero.name" <mail@...iej.szmigiero.name>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
"qperret@...gle.com" <qperret@...gle.com>,
"arnd@...db.de" <arnd@...db.de>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"ddutile@...hat.com" <ddutile@...hat.com>,
"naoya.horiguchi@....com" <naoya.horiguchi@....com>,
"Christopherson,, Sean" <seanjc@...gle.com>,
"wanpengli@...cent.com" <wanpengli@...cent.com>,
"yu.c.zhang@...ux.intel.com" <yu.c.zhang@...ux.intel.com>,
"hughd@...gle.com" <hughd@...gle.com>,
"aarcange@...hat.com" <aarcange@...hat.com>,
"mingo@...hat.com" <mingo@...hat.com>,
"hpa@...or.com" <hpa@...or.com>,
"Nakajima, Jun" <jun.nakajima@...el.com>,
"jlayton@...nel.org" <jlayton@...nel.org>,
"joro@...tes.org" <joro@...tes.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"steven.price@....com" <steven.price@....com>,
"Hansen, Dave" <dave.hansen@...el.com>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linmiaohe@...wei.com" <linmiaohe@...wei.com>
Subject: Re: [PATCH v10 1/9] mm: Introduce memfd_restricted system call to
create restricted user memory
On Mon, 2022-12-19 at 15:53 +0800, Chao Peng wrote:
> >
> > [...]
> >
> > > +
> > > +	/*
> > > +	 * These pages are currently unmovable so don't place them into movable
> > > +	 * pageblocks (e.g. CMA and ZONE_MOVABLE).
> > > +	 */
> > > +	mapping = memfd->f_mapping;
> > > +	mapping_set_unevictable(mapping);
> > > +	mapping_set_gfp_mask(mapping,
> > > +			     mapping_gfp_mask(mapping) & ~__GFP_MOVABLE);
> >
> > But, IIUC, removing the __GFP_MOVABLE flag here only makes page allocation
> > come from non-movable zones, but doesn't necessarily prevent the page from
> > being migrated. My first glance is you need to implement either
> > a_ops->migrate_folio() or just get_page() after faulting in the page to
> > prevent that.
>
> The current API restrictedmem_get_page() already does this: after the
> caller calls it, it holds a reference to the page. The caller then
> decides when to call put_page() as appropriate.
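If I am reading the above correctly, the expected caller-side pattern is
roughly the below (just a sketch to make sure we mean the same thing; the
restrictedmem_get_page() signature here is what I recall from the series and
may not match the real patch exactly):

/*
 * Illustrative caller-side sketch only, not the actual KVM/restrictedmem
 * code.
 */
static int sketch_map_restricted_page(struct file *memfd, pgoff_t offset)
{
	struct page *page;
	int order, ret;

	ret = restrictedmem_get_page(memfd, offset, &page, &order);
	if (ret)
		return ret;

	/*
	 * restrictedmem_get_page() returned with a reference held on the
	 * page; as long as that reference is not dropped, migration of the
	 * page cannot complete.
	 */

	/* ... map page_to_pfn(page) into the private mapping ... */

	/*
	 * Dropped here only for brevity; in practice the caller keeps the
	 * reference until the private mapping is torn down.
	 */
	put_page(page);
	return 0;
}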
I tried to dig up some history. Perhaps I am missing something, but it seems
Kirill said in v9 that this code doesn't prevent page migration, and we need to
increase the page refcount in restrictedmem_get_page():
https://lore.kernel.org/linux-mm/20221129112139.usp6dqhbih47qpjl@box.shutemov.name/
But looking at this series, it seems restrictedmem_get_page() in this v10 is
identical to the one in v9 (except v10 uses 'folio' instead of 'page')?
Anyway, if this is not yet fixed, then it should be. Otherwise, a comment at
the place where the page refcount is increased would help people understand
that page migration is actually prevented.
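For example, something along these lines at the point the reference is taken
would be enough IMHO (purely a sketch of where such a comment could go inside
restrictedmem_get_page(); the surrounding code is made up and the real v10
code may take the reference differently):

	/*
	 * Take a reference before handing the page back to the caller.
	 * Besides controlling its lifetime, the elevated refcount is
	 * currently the only thing preventing the page from being migrated,
	 * since restrictedmem does not implement a_ops->migrate_folio().
	 */
	folio_get(folio);
	*pagep = folio_page(folio, 0);
	if (order)
		*order = folio_order(folio);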