Message-ID: <b11e9316-4460-6502-0e1c-74555ff84d6a@redhat.com>
Date: Mon, 4 Jul 2016 10:14:56 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Xiao Guangrong <guangrong.xiao@...ux.intel.com>,
Neo Jia <cjia@...dia.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Kirti Wankhede <kwankhede@...dia.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>
Subject: Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed
On 04/07/2016 09:59, Xiao Guangrong wrote:
>
>> But apart from this, it's much more obvious to consider the refcount.
>> The x86 MMU code doesn't care if the page is reserved or not;
>> mmu_set_spte does a kvm_release_pfn_clean, hence it makes sense for
>> hva_to_pfn_remapped to try doing a get_page (via kvm_get_pfn) after
>> invoking the fault handler, just like the get_user_pages family of
>> functions does.
>
> Well, it's a little strange that you always try to get a refcount
> for a PFNMAP region without MIXEDMAP, which indicates that none of
> the memory in this region has a 'struct page' backing it.
Fair enough, I can modify the comment.
/*
* In case the VMA has VM_MIXEDMAP set, whoever called remap_pfn_range
* is also going to call e.g. unmap_mapping_range before the underlying
* non-reserved pages are freed, which will then call our MMU notifier.
* We still have to get a reference to the page here, because the callers
* of *hva_to_pfn* and *gfn_to_pfn* ultimately end up doing a
* kvm_release_pfn_clean on the returned pfn. If the pfn is
* reserved, the kvm_get_pfn/kvm_release_pfn_clean pair will simply
* do nothing.
*/
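
For reference, the hva_to_pfn_remapped described above would then look
roughly like this. This is a simplified sketch of the approach under
discussion, not necessarily the exact patch; the async path is elided
and the fixup_user_fault retry details may differ:

static int hva_to_pfn_remapped(struct vm_area_struct *vma,
			       unsigned long addr, bool write_fault,
			       kvm_pfn_t *p_pfn)
{
	unsigned long pfn;
	bool unlocked = false;
	int r;

	r = follow_pfn(vma, addr, &pfn);
	if (r) {
		/*
		 * get_user_pages fails for VM_IO and VM_PFNMAP vmas and
		 * does not call the fault handler, so do it here.
		 */
		r = fixup_user_fault(current, current->mm, addr,
				     write_fault ? FAULT_FLAG_WRITE : 0,
				     &unlocked);
		if (unlocked)
			return -EAGAIN;	/* mmap_sem was dropped, retry */
		if (r)
			return r;

		r = follow_pfn(vma, addr, &pfn);
		if (r)
			return r;
	}

	/*
	 * The comment above goes here: mirror what get_user_pages does
	 * and take a reference; for a reserved pfn this is a no-op.
	 */
	kvm_get_pfn(pfn);
	*p_pfn = pfn;
	return 0;
}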
Paolo
> But it works, as kvm_{get, release}_* are already aware of
> reserved pfns, so I am okay with it...
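
For reference, that awareness lives in the helpers themselves; roughly,
from virt/kvm/kvm_main.c at the time:

void kvm_get_pfn(kvm_pfn_t pfn)
{
	/* Reserved pfns have no struct page to pin. */
	if (!kvm_is_reserved_pfn(pfn))
		get_page(pfn_to_page(pfn));
}

void kvm_release_pfn_clean(kvm_pfn_t pfn)
{
	/* Likewise, only drop the reference for non-reserved pfns. */
	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn))
		put_page(pfn_to_page(pfn));
}

Both bail out for reserved pfns, so the get/release pair really is a
no-op there.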