Message-ID: <2130082365.883434.1533803526182.JavaMail.zimbra@redhat.com>
Date: Thu, 9 Aug 2018 04:32:06 -0400 (EDT)
From: Pankaj Gupta <pagupta@...hat.com>
To: Zhang Yi <yi.z.zhang@...ux.intel.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-nvdimm@...ts.01.org, pbonzini@...hat.com,
dan j williams <dan.j.williams@...el.com>, jack@...e.cz,
hch@....de, yu c zhang <yu.c.zhang@...el.com>, linux-mm@...ck.org,
rkrcmar@...hat.com, yi z zhang <yi.z.zhang@...el.com>
Subject: Re: [PATCH V3 4/4] kvm: add a check if pfn is from NVDIMM pmem.
>
> For device-specific memory space, when we move these pfn ranges into a
> memory zone, we set the page reserved flag at that time. Some of these
> are reserved for device MMIO, and some are not, such as NVDIMM pmem.
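For context, the PG_reserved bit on these pages comes from the memmap
initialization path when the range is onlined. From my reading of
mm/page_alloc.c around this version (memmap_init_zone(); the exact shape
may differ by kernel version), the relevant part is roughly:

	page = pfn_to_page(pfn);
	__init_single_page(page, pfn, zone, nid);
	/* Hotplugged ranges -- which include ZONE_DEVICE/pmem -- get
	 * PG_reserved here, even though they are ordinary struct-page
	 * backed memory from KVM's point of view. */
	if (context == MEMMAP_HOTPLUG)
		SetPageReserved(page);
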
>
> Now, we map these dev_dax or fs_dax pages to KVM for the DIMM/NVDIMM
> backend. Since these pages are reserved, the check in
> kvm_is_reserved_pfn() misconceives them as MMIO. Therefore, we
> introduce 2 page map types, MEMORY_DEVICE_FS_DAX/MEMORY_DEVICE_DEV_DAX,
> to indentify these pages are from NVDIMM pmem. and let kvm treat these
s/indentify/identify/ and remove the '.'
> as normal pages.
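The helper this relies on, is_dax_page(), is defined in an earlier patch
of this series which isn't quoted here. A minimal sketch of the idea,
assuming it keys off the ZONE_DEVICE pgmap type (including the
MEMORY_DEVICE_DEV_DAX type this series adds), would be:

	/* Sketch only -- a paraphrase of the earlier patch, not that code. */
	#include <linux/mm.h>
	#include <linux/memremap.h>

	static inline bool is_dax_page(const struct page *page)
	{
		/* DAX pages live in ZONE_DEVICE and carry a dev_pagemap
		 * that records whether fs_dax or dev_dax created them. */
		return is_zone_device_page(page) &&
		       (page->pgmap->type == MEMORY_DEVICE_FS_DAX ||
			page->pgmap->type == MEMORY_DEVICE_DEV_DAX);
	}
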
>
> Without this patch, many operations will be missed due to this
> mistreatment of pmem pages. For example, a page may not have a chance
> to be unpinned for the KVM guest (in kvm_release_pfn_clean), or to be
> marked as dirty/accessed (in kvm_set_pfn_dirty/accessed), etc.
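To make the missed-unpin case concrete: kvm_release_pfn_clean() skips
anything kvm_is_reserved_pfn() reports as reserved, so a DAX page pinned
via get_user_pages() never gets its reference dropped. From my reading of
virt/kvm/kvm_main.c around this version (may differ slightly elsewhere):

	void kvm_release_pfn_clean(kvm_pfn_t pfn)
	{
		/* Reserved pfns are treated as MMIO that was never
		 * ref-counted, so put_page() is skipped -- wrong for
		 * reserved-but-pinned DAX/pmem pages without the change
		 * below. */
		if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn))
			put_page(pfn_to_page(pfn));
	}
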
>
> Signed-off-by: Zhang Yi <yi.z.zhang@...ux.intel.com>
> ---
> virt/kvm/kvm_main.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index c44c406..969b6ca 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -147,8 +147,12 @@ __weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
>
> bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
> {
> - if (pfn_valid(pfn))
> - return PageReserved(pfn_to_page(pfn));
> + struct page *page;
> +
> + if (pfn_valid(pfn)) {
> + page = pfn_to_page(pfn);
> + return PageReserved(page) && !is_dax_page(page);
> + }
>
> return true;
> }
> --
> 2.7.4
>
>