Message-ID: <05a5ec889f3e04d71c0ed067bedea2e3b0eacd00.camel@intel.com>
Date: Fri, 23 Dec 2022 00:50:22 +0000
From: "Huang, Kai" <kai.huang@...el.com>
To: "Christopherson,, Sean" <seanjc@...gle.com>,
"chao.p.peng@...ux.intel.com" <chao.p.peng@...ux.intel.com>
CC: "tglx@...utronix.de" <tglx@...utronix.de>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"jmattson@...gle.com" <jmattson@...gle.com>,
"Hocko, Michal" <mhocko@...e.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"ak@...ux.intel.com" <ak@...ux.intel.com>,
"Lutomirski, Andy" <luto@...nel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"tabba@...gle.com" <tabba@...gle.com>,
"david@...hat.com" <david@...hat.com>,
"michael.roth@....com" <michael.roth@....com>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
"corbet@....net" <corbet@....net>,
"qemu-devel@...gnu.org" <qemu-devel@...gnu.org>,
"dhildenb@...hat.com" <dhildenb@...hat.com>,
"bfields@...ldses.org" <bfields@...ldses.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>, "bp@...en8.de" <bp@...en8.de>,
"ddutile@...hat.com" <ddutile@...hat.com>,
"rppt@...nel.org" <rppt@...nel.org>,
"shuah@...nel.org" <shuah@...nel.org>,
"vkuznets@...hat.com" <vkuznets@...hat.com>,
"vbabka@...e.cz" <vbabka@...e.cz>,
"mail@...iej.szmigiero.name" <mail@...iej.szmigiero.name>,
"naoya.horiguchi@....com" <naoya.horiguchi@....com>,
"qperret@...gle.com" <qperret@...gle.com>,
"arnd@...db.de" <arnd@...db.de>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
"yu.c.zhang@...ux.intel.com" <yu.c.zhang@...ux.intel.com>,
"aarcange@...hat.com" <aarcange@...hat.com>,
"wanpengli@...cent.com" <wanpengli@...cent.com>,
"vannapurve@...gle.com" <vannapurve@...gle.com>,
"hughd@...gle.com" <hughd@...gle.com>,
"mingo@...hat.com" <mingo@...hat.com>,
"hpa@...or.com" <hpa@...or.com>,
"Nakajima, Jun" <jun.nakajima@...el.com>,
"jlayton@...nel.org" <jlayton@...nel.org>,
"joro@...tes.org" <joro@...tes.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"Wang, Wei W" <wei.w.wang@...el.com>,
"steven.price@....com" <steven.price@....com>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"Hansen, Dave" <dave.hansen@...el.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linmiaohe@...wei.com" <linmiaohe@...wei.com>
Subject: Re: [PATCH v10 1/9] mm: Introduce memfd_restricted system call to
create restricted user memory
On Thu, 2022-12-22 at 18:15 +0000, Sean Christopherson wrote:
> On Wed, Dec 21, 2022, Chao Peng wrote:
> > On Tue, Dec 20, 2022 at 08:33:05AM +0000, Huang, Kai wrote:
> > > On Tue, 2022-12-20 at 15:22 +0800, Chao Peng wrote:
> > > > On Mon, Dec 19, 2022 at 08:48:10AM +0000, Huang, Kai wrote:
> > > > > On Mon, 2022-12-19 at 15:53 +0800, Chao Peng wrote:
> > > But for the non-restricted-mem case, it is correct for KVM to decrease the page's
> > > refcount after setting up the mapping in the secondary MMU; otherwise the page will
> > > be pinned by KVM for a normal VM (since KVM uses GUP to get the page).
> >
> > That's true. Actually it is even true for the restrictedmem case; most likely we
> > will still need the kvm_release_pfn_clean() in the KVM generic code. On one
> > hand, other restrictedmem users like pKVM may not require page pinning
> > at all. On the other hand, see below.
> >
> > >
> > > So what we are expecting is: if the page comes from restricted mem, then KVM
> > > must not decrease the refcount; otherwise, for a normal page obtained via GUP, KVM should.
>
> No, requiring the user (KVM) to guard against lack of support for page migration
> in restricted mem is a terrible API. It's totally fine for restricted mem to not
> support page migration until there's a use case, but punting the problem to KVM
> is not acceptable. Restricted mem itself doesn't yet support page migration,
> e.g. explosions would occur even if KVM wanted to allow migration since there is
> no notification to invalidate existing mappings.
>
Yes, totally agree (I also replied separately).
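
To make the refcounting behaviour discussed above concrete, here is a minimal
userspace toy model (purely illustrative; toy_page, toy_gup() and toy_release()
are made-up names, not kernel APIs). It shows why a GUP user such as KVM has to
drop its reference once the secondary MMU mapping is installed, otherwise the
page stays pinned and cannot be migrated:

#include <stdio.h>

/* Toy stand-in for struct page: only the refcount matters for this model. */
struct toy_page {
        int refcount;
};

/* Toy GUP: the caller gets the page back with an elevated refcount. */
static struct toy_page *toy_gup(struct toy_page *page)
{
        page->refcount++;
        return page;
}

/* Toy analogue of dropping the GUP reference (what kvm_release_pfn_clean() does). */
static void toy_release(struct toy_page *page)
{
        page->refcount--;
}

/* The page is effectively pinned (unmigratable) while extra refs remain. */
static int toy_page_pinned(const struct toy_page *page)
{
        return page->refcount > 1;
}

int main(void)
{
        struct toy_page page = { .refcount = 1 };       /* base ref held by the "filemap" */
        struct toy_page *pfn;

        /* Normal VM path: GUP the page and install it in the secondary MMU... */
        pfn = toy_gup(&page);
        printf("after GUP, pinned: %d\n", toy_page_pinned(pfn));        /* prints 1 */

        /* ...then drop the GUP reference, otherwise the page stays pinned. */
        toy_release(pfn);
        printf("after release, pinned: %d\n", toy_page_pinned(&page));  /* prints 0 */

        return 0;
}

In the restrictedmem case there is no GUP reference in the first place, which is
where the question of whether KVM should skip the release came from.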
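
On the migration point Sean raises, the missing piece is some notification from
the backing store so its users can invalidate their mappings before a page moves.
A hypothetical sketch of that kind of callback interface (toy_notifier_ops and
the other names here are invented for illustration and are not the actual
restrictedmem API):

#include <stdio.h>

/*
 * Hypothetical notifier: the backing store calls back into its users
 * (e.g. KVM) so they can zap stale mappings before a page is moved.
 */
struct toy_notifier_ops {
        void (*invalidate_start)(void *user_data, unsigned long offset);
        void (*invalidate_end)(void *user_data, unsigned long offset);
};

static void kvm_like_invalidate_start(void *user_data, unsigned long offset)
{
        printf("%s: zap mappings for offset %lu\n", (const char *)user_data, offset);
}

static void kvm_like_invalidate_end(void *user_data, unsigned long offset)
{
        printf("%s: invalidation finished for offset %lu\n", (const char *)user_data, offset);
}

/* What a backing store would have to do around migrating one page. */
static void toy_migrate_page(const struct toy_notifier_ops *ops, void *user_data,
                             unsigned long offset)
{
        ops->invalidate_start(user_data, offset);
        /* ... actually move the page contents here ... */
        ops->invalidate_end(user_data, offset);
}

int main(void)
{
        const struct toy_notifier_ops ops = {
                .invalidate_start = kvm_like_invalidate_start,
                .invalidate_end = kvm_like_invalidate_end,
        };

        toy_migrate_page(&ops, "kvm", 0);
        return 0;
}

Until restrictedmem grows something along these lines, allowing migration would
indeed leave stale secondary MMU mappings behind, as Sean points out.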