Message-ID: <577A1C93.3010204@linux.intel.com>
Date: Mon, 4 Jul 2016 16:21:39 +0800
From: Xiao Guangrong <guangrong.xiao@...ux.intel.com>
To: Paolo Bonzini <pbonzini@...hat.com>, Neo Jia <cjia@...dia.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Kirti Wankhede <kwankhede@...dia.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>
Subject: Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed
On 07/04/2016 04:14 PM, Paolo Bonzini wrote:
>
>
> On 04/07/2016 09:59, Xiao Guangrong wrote:
>>
>>> But apart from this, it's much more obvious to consider the refcount.
>>> The x86 MMU code doesn't care if the page is reserved or not;
>>> mmu_set_spte does a kvm_release_pfn_clean, hence it makes sense for
>>> hva_to_pfn_remapped to try doing a get_page (via kvm_get_pfn) after
>>> invoking the fault handler, just like the get_user_pages family of
> functions does.
>>
>> Well, it's a little strange that you always try to get a refcount
>> for a PFNMAP region without VM_MIXEDMAP, which indicates that no memory
>> in this region has a 'struct page' backing.
>
> Fair enough, I can modify the comment.
>
> /*
> * In case the VMA has VM_MIXEDMAP set, whoever called remap_pfn_range
> * is also going to call e.g. unmap_mapping_range before the underlying
> * non-reserved pages are freed, which will then call our MMU notifier.
> * We still have to get a reference here to the page, because the callers
> * of *hva_to_pfn* and *gfn_to_pfn* ultimately end up doing a
> * kvm_release_pfn_clean on the returned pfn. If the pfn is
> * reserved, the kvm_get_pfn/kvm_release_pfn_clean pair will simply
> * do nothing.
> */
>
Excellent. I like it. :)
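
For reference, a minimal sketch of how hva_to_pfn_remapped could take the
reference with that comment in place, assuming the caller has already invoked
the VMA's fault handler for the address. The signature and surrounding details
are assumptions based on this thread, not the posted patch:

        /*
         * Sketch only: names and signature are assumed from this
         * discussion, not taken from the actual patch.
         */
        static int hva_to_pfn_remapped(struct vm_area_struct *vma,
                                       unsigned long addr,
                                       kvm_pfn_t *p_pfn)
        {
                unsigned long pfn;

                /* VM_PFNMAP: derive the pfn directly from the VMA offset. */
                pfn = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;

                /*
                 * In case the VMA has VM_MIXEDMAP set, whoever called
                 * remap_pfn_range is also going to call e.g.
                 * unmap_mapping_range before the underlying non-reserved
                 * pages are freed, which will then call our MMU notifier.
                 * We still have to get a reference here to the page,
                 * because the callers of *hva_to_pfn* and *gfn_to_pfn*
                 * ultimately end up doing a kvm_release_pfn_clean on the
                 * returned pfn.  If the pfn is reserved, the
                 * kvm_get_pfn/kvm_release_pfn_clean pair will simply do
                 * nothing.
                 */
                kvm_get_pfn(pfn);

                *p_pfn = pfn;
                return 0;
        }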