Message-ID: <20221021135434.GB3607894@chaop.bj.intel.com>
Date: Fri, 21 Oct 2022 21:54:34 +0800
From: Chao Peng <chao.p.peng@...ux.intel.com>
To: Vishal Annapurve <vannapurve@...gle.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
"Gupta, Pankaj" <pankaj.gupta@....com>,
Vlastimil Babka <vbabka@...e.cz>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-api@...r.kernel.org,
linux-doc@...r.kernel.org, qemu-devel@...gnu.org,
Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
Hugh Dickins <hughd@...gle.com>,
Jeff Layton <jlayton@...nel.org>,
"J . Bruce Fields" <bfields@...ldses.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Shuah Khan <shuah@...nel.org>, Mike Rapoport <rppt@...nel.org>,
Steven Price <steven.price@....com>,
"Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
Yu Zhang <yu.c.zhang@...ux.intel.com>, luto@...nel.org,
jun.nakajima@...el.com, dave.hansen@...el.com, ak@...ux.intel.com,
david@...hat.com, aarcange@...hat.com, ddutile@...hat.com,
dhildenb@...hat.com, Quentin Perret <qperret@...gle.com>,
Michael Roth <michael.roth@....com>, mhocko@...e.com,
Muchun Song <songmuchun@...edance.com>, wei.w.wang@...el.com
Subject: Re: [PATCH v8 1/8] mm/memfd: Introduce userspace inaccessible memfd
On Thu, Oct 20, 2022 at 04:20:58PM +0530, Vishal Annapurve wrote:
> On Wed, Oct 19, 2022 at 9:02 PM Kirill A . Shutemov
> <kirill.shutemov@...ux.intel.com> wrote:
> >
> > On Tue, Oct 18, 2022 at 07:12:10PM +0530, Vishal Annapurve wrote:
> > > On Tue, Oct 18, 2022 at 3:27 AM Kirill A . Shutemov
> > > <kirill.shutemov@...ux.intel.com> wrote:
> > > >
> > > > On Mon, Oct 17, 2022 at 06:39:06PM +0200, Gupta, Pankaj wrote:
> > > > > On 10/17/2022 6:19 PM, Kirill A . Shutemov wrote:
> > > > > > On Mon, Oct 17, 2022 at 03:00:21PM +0200, Vlastimil Babka wrote:
> > > > > > > On 9/15/22 16:29, Chao Peng wrote:
> > > > > > > > From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> > > > > > > >
> > > > > > > > > KVM can use memfd-provided memory for guest memory. For normal
> > > > > > > > > userspace-accessible memory, KVM userspace (e.g. QEMU) mmaps the memfd
> > > > > > > > > into its virtual address space and then tells KVM to use the virtual
> > > > > > > > > address to set up the mapping in the secondary page table (e.g. EPT).
> > > > > > > >
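
For reference, the normal flow described above looks roughly like the
sketch below on the KVM userspace side (standard memfd_create()/mmap()/
KVM_SET_USER_MEMORY_REGION usage; vm_fd and mem_size are placeholders,
error handling omitted):

  int memfd = memfd_create("guest-ram", MFD_CLOEXEC);
  ftruncate(memfd, mem_size);

  /* Map it so there is a host virtual address to hand to KVM. */
  void *hva = mmap(NULL, mem_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, memfd, 0);

  struct kvm_userspace_memory_region region = {
          .slot            = 0,
          .guest_phys_addr = 0,
          .memory_size     = mem_size,
          /* KVM walks this VA to build the secondary page table. */
          .userspace_addr  = (unsigned long)hva,
  };
  ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
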
> > > > > > > > > With confidential computing technologies like Intel TDX, the
> > > > > > > > > memfd-provided memory may be encrypted with a special key for a
> > > > > > > > > special software domain (e.g. a KVM guest) and is not expected to
> > > > > > > > > be directly accessed by userspace. More precisely, userspace access
> > > > > > > > > to such encrypted memory may lead to a host crash, so it should be
> > > > > > > > > prevented.
> > > > > > > >
> > > > > > > > > This patch introduces a userspace-inaccessible memfd (created with
> > > > > > > > > MFD_INACCESSIBLE). Its memory is inaccessible from userspace through
> > > > > > > > > ordinary MMU access (e.g. read/write/mmap) but can be accessed via an
> > > > > > > > > in-kernel interface, so KVM can directly interact with core-mm without
> > > > > > > > > the need to map the memory into KVM userspace.
> > > > > > > >
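
With MFD_INACCESSIBLE, creating the fd is expected to look roughly like
the sketch below (assuming sizing via ftruncate()/fallocate() on the
returned fd works as in this patch); the point is that ordinary
userspace access to it then fails:

  /* Memory KVM can consume but userspace cannot touch. */
  int priv_fd = memfd_create("guest-private-ram", MFD_INACCESSIBLE);
  ftruncate(priv_fd, mem_size);

  /* Ordinary MMU access is rejected for this fd, e.g. mmap() fails. */
  void *p = mmap(NULL, mem_size, PROT_READ | PROT_WRITE,
                 MAP_SHARED, priv_fd, 0);    /* expected to fail */
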
> > > > > > > > > It provides the semantics required for KVM guest private (encrypted)
> > > > > > > > > memory support: a file descriptor with this flag set is going to be
> > > > > > > > > used as the source of guest memory in confidential computing
> > > > > > > > > environments such as Intel TDX/AMD SEV.
> > > > > > > >
> > > > > > > > KVM userspace is still in charge of the lifecycle of the memfd. It
> > > > > > > > should pass the opened fd to KVM. KVM uses the kernel APIs newly added
> > > > > > > > in this patch to obtain the physical memory address and then populate
> > > > > > > > the secondary page table entries.
> > > > > > > >
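
On the KVM side, consuming those in-kernel APIs would look roughly like
this (a simplified sketch: the get/put helpers follow the ones added by
this patch, and gfn_to_pgoff() is just a placeholder for the memslot's
gfn-to-file-offset translation):

  pfn_t pfn;
  int order, ret;

  /* Resolve guest memory straight from the fd, no host VA involved. */
  ret = inaccessible_get_pfn(file, gfn_to_pgoff(slot, gfn), &pfn, &order);
  if (ret)
          return ret;

  /* ... install pfn into the secondary page table (e.g. EPT) ... */

  inaccessible_put_pfn(file, pfn);
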
> > > > > > > > > The userspace inaccessible memfd can be fallocate-ed and hole-punched
> > > > > > > > > from userspace. When hole-punching happens, KVM gets notified through
> > > > > > > > > inaccessible_notifier and then gets a chance to remove any mapped
> > > > > > > > > entries of the range from the secondary page tables.
> > > > > > > >
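
The notification mentioned above is consumed by registering an
inaccessible_notifier against the fd; a rough KVM-side sketch (naming
and exact callback argument types here are illustrative):

  static void kvm_private_mem_invalidate(struct inaccessible_notifier *notifier,
                                         pgoff_t start, pgoff_t end)
  {
          /* Zap secondary page table entries backed by [start, end). */
  }

  static struct inaccessible_notifier_ops kvm_private_mem_notifier_ops = {
          .invalidate = kvm_private_mem_invalidate,
  };

  /*
   * Hooked up when the memslot is bound to the fd, e.g.:
   *   notifier->ops = &kvm_private_mem_notifier_ops;
   *   inaccessible_register_notifier(file, notifier);
   */
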
> > > > > > > > > The userspace inaccessible memfd itself is implemented as a shim layer
> > > > > > > > > on top of real memory file systems like tmpfs/hugetlbfs, but this patch
> > > > > > > > > only implements tmpfs. The allocated memory is currently marked as
> > > > > > > > > unmovable and unevictable; this is required for current confidential
> > > > > > > > > usage, but in the future this might change.
> > > > > > > >
> > > > > > > > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> > > > > > > > Signed-off-by: Chao Peng <chao.p.peng@...ux.intel.com>
> > > > > > > > ---
> > > > > > >
> > > > > > > ...
> > > > > > >
> > > > > > > > +static long inaccessible_fallocate(struct file *file, int mode,
> > > > > > > > + loff_t offset, loff_t len)
> > > > > > > > +{
> > > > > > > > + struct inaccessible_data *data = file->f_mapping->private_data;
> > > > > > > > + struct file *memfd = data->memfd;
> > > > > > > > + int ret;
> > > > > > > > +
> > > > > > > > + if (mode & FALLOC_FL_PUNCH_HOLE) {
> > > > > > > > + if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
> > > > > > > > + return -EINVAL;
> > > > > > > > + }
> > > > > > > > +
> > > > > > > > + ret = memfd->f_op->fallocate(memfd, mode, offset, len);
> > > > > > > > + inaccessible_notifier_invalidate(data, offset, offset + len);
> > > > > > >
> > > > > > > Wonder if invalidate should precede the actual hole punch, otherwise we open
> > > > > > > a window where the page tables point to memory no longer valid?
> > > > > >
> > > > > > Yes, you are right. Thanks for catching this.
> > > > >
> > > > > I also noticed this. But then thought the memory would be anyways zeroed
> > > > > (hole punched) before this call?
> > > >
> > > > Hole punching can free pages, given that offset/len covers full page.
> > > >
> > > > --
> > > > Kiryl Shutsemau / Kirill A. Shutemov
> > >
> > > I think moving this notifier_invalidate before fallocate may not solve
> > > the problem completely. Is it possible that between invalidate and
> > > fallocate, KVM tries to handle the page fault for the guest VM from
> > > another vcpu and uses the pages to be freed to back gpa ranges? Should
> > > hole punching here also update mem_attr first to say that KVM should
> > > consider the corresponding gpa ranges to be no longer backed by the
> > > inaccessible memfd?
> >
> > We rely on external synchronization to prevent this. See code around
> > mmu_invalidate_retry_hva().
> >
> > --
> > Kiryl Shutsemau / Kirill A. Shutemov
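
For context, the external synchronization mentioned here is the pattern
KVM's page fault paths already use with the mmu_notifier; roughly (a
simplified sketch, not the exact upstream code):

  /*
   * Snapshot the invalidation sequence count before resolving the pfn,
   * then re-check under mmu_lock before installing the mapping.
   */
  mmu_seq = kvm->mmu_invalidate_seq;
  smp_rmb();

  /* ... resolve the pfn for the faulting hva outside mmu_lock ... */

  write_lock(&kvm->mmu_lock);
  if (mmu_invalidate_retry_hva(kvm, mmu_seq, hva)) {
          /* An invalidation raced with us; bail out and retry the fault. */
          goto out_unlock;
  }
  /* ... install the mapping in the secondary page table ... */
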
>
> IIUC, mmu_invalidate_retry_hva/gfn ensures that page faults on gfn
> ranges that are being invalidated are retried until invalidation is
> complete. In this case, is it possible that KVM tries to serve the
> page fault after inaccessible_notifier_invalidate is complete but
> before fallocate could punch a hole in the file?
> e.g.
> inaccessible_notifier_invalidate(...)
> ... (system event preempting this control flow, giving a window for
> the guest to retry accessing the gfn range which was invalidated)
> fallocate(.., PUNCH_HOLE..)

Looks like this is something that can happen. And it sounds to me the
solution just needs to follow the mmu_notifier's way of using an
invalidate_start/end pair:

  invalidate_start()  -->  kvm->mmu_invalidate_in_progress++;
                           zap KVM page table entries;
  fallocate()
  invalidate_end()    -->  kvm->mmu_invalidate_in_progress--;

Then during the invalidate_start/end time window, mmu_invalidate_retry_gfn
checks 'mmu_invalidate_in_progress' and prevents repopulating the same
page in the KVM page table:

  if (kvm->mmu_invalidate_in_progress)
          return 1; /* retry */
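
Putting it together, a rough sketch of what I have in mind for the
fallocate path (untested; the invalidate_start/end helper names below
are just placeholders for whatever we end up adding):

  static long inaccessible_fallocate(struct file *file, int mode,
                                     loff_t offset, loff_t len)
  {
          struct inaccessible_data *data = file->f_mapping->private_data;
          struct file *memfd = data->memfd;
          int ret;

          if (mode & FALLOC_FL_PUNCH_HOLE) {
                  if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
                          return -EINVAL;
          }

          /*
           * Bump mmu_invalidate_in_progress and zap existing mappings so
           * that concurrent faults on this range keep retrying...
           */
          inaccessible_notifier_invalidate_start(data, offset, offset + len);

          ret = memfd->f_op->fallocate(memfd, mode, offset, len);

          /* ...and only let them repopulate once the pages are gone. */
          inaccessible_notifier_invalidate_end(data, offset, offset + len);

          return ret;
  }
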
Thanks,
Chao