Message-ID: <20180830192312.GA84758@tiger-server>
Date: Fri, 31 Aug 2018 03:23:13 +0800
From: Yi Zhang <yi.z.zhang@...ux.intel.com>
To: Pankaj Gupta <pagupta@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-nvdimm@...ts.01.org, pbonzini@...hat.com,
dan j williams <dan.j.williams@...el.com>,
dave jiang <dave.jiang@...el.com>,
yu c zhang <yu.c.zhang@...el.com>, david@...hat.com,
jack@...e.cz, hch@....de, linux-mm@...ck.org, rkrcmar@...hat.com,
jglisse@...hat.com, yi z zhang <yi.z.zhang@...el.com>
Subject: Re: [PATCH V4 4/4] kvm: add a check if pfn is from NVDIMM pmem.
On 2018-08-29 at 06:15:48 -0400, Pankaj Gupta wrote:
>
> >
> > For device-specific memory, when we add these PFN ranges to a memory
> > zone we set the page reserved flag at that time. Some of these
> > reserved pages back device MMIO, but others do not, such as NVDIMM
> > pmem.
> >
> > Now, when we map these dev_dax or fs_dax pages into KVM as a
> > DIMM/NVDIMM backend, the kvm_is_reserved_pfn() check misidentifies
> > them as MMIO because they are reserved. Therefore, we introduce two
> > page map types, MEMORY_DEVICE_FS_DAX/MEMORY_DEVICE_DEV_DAX, to
> > identify pages that come from NVDIMM pmem and let KVM treat them as
> > normal pages.
> >
> > Without this patch, several operations are skipped because of this
> > mistreatment of pmem pages: for example, a page may never be unpinned
> > for the KVM guest (in kvm_release_pfn_clean), and it cannot be marked
> > dirty/accessed (in kvm_set_pfn_dirty/accessed), etc.
> >
> > Signed-off-by: Zhang Yi <yi.z.zhang@...ux.intel.com>
> > ---
> > virt/kvm/kvm_main.c | 8 ++++++--
> > 1 file changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index c44c406..969b6ca 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -147,8 +147,12 @@ __weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
> >
> >  bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
> >  {
> > -	if (pfn_valid(pfn))
> > -		return PageReserved(pfn_to_page(pfn));
> > +	struct page *page;
> > +
> > +	if (pfn_valid(pfn)) {
> > +		page = pfn_to_page(pfn);
> > +		return PageReserved(page) && !is_dax_page(page);
> > +	}
> >
> >  	return true;
> >  }
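
[Editor's note: is_dax_page() is not defined in this hunk; it comes from an
earlier patch in the series. A minimal, hypothetical sketch of such a helper,
assuming it simply keys off the ZONE_DEVICE dev_pagemap type (with
MEMORY_DEVICE_DEV_DAX being the new type this series introduces), could look
like the following.]

/*
 * Hypothetical sketch only; the real helper is defined earlier in this
 * series.  Assumes DAX pages can be recognized by their dev_pagemap type
 * (needs <linux/mm.h> and <linux/memremap.h>).
 */
static inline bool is_dax_page(const struct page *page)
{
	/* Only ZONE_DEVICE pages carry a dev_pagemap. */
	if (!is_zone_device_page(page))
		return false;

	/* Treat both fs_dax and dev_dax backed pmem as DAX pages. */
	return page->pgmap->type == MEMORY_DEVICE_FS_DAX ||
	       page->pgmap->type == MEMORY_DEVICE_DEV_DAX;
}

[With such a check, a reserved page that is actually DAX/pmem is no longer
reported as reserved to KVM, so kvm_release_pfn_clean() and
kvm_set_pfn_dirty()/kvm_set_pfn_accessed() take the normal page paths again.]
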
>
> Acked-by: Pankaj Gupta <pagupta@...hat.com>
Thanks for your kind review, Pankaj. Since all of the patches [1,2,3,4]/4 now
carry Reviewed-by/Acked-by tags, can this series be queued now?
>
> > --
> > 2.7.4
> >
> >