Message-ID: <8ff54c7b-2e64-4503-03f6-1a858d9d5746@redhat.com>
Date:	Mon, 4 Jul 2016 09:48:52 +0200
From:	Paolo Bonzini <pbonzini@...hat.com>
To:	Xiao Guangrong <guangrong.xiao@...ux.intel.com>,
	Neo Jia <cjia@...dia.com>
Cc:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
	Kirti Wankhede <kwankhede@...dia.com>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Radim Krčmář <rkrcmar@...hat.com>
Subject: Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed



On 04/07/2016 09:37, Xiao Guangrong wrote:
>>>
>>
>> It actually is a portion of the physical MMIO region which is set up
>> by the vfio mmap.
> 
> So I do not think we need to care about its refcount, i.e., we can
> consider it a reserved pfn, Paolo?

NVIDIA provided me (off-list) with a simple patch that modified VFIO to
exhibit the problem, and it didn't use reserved PFNs.  This is why the
commit message for the patch is not entirely accurate.

But apart from this, it's much simpler to just handle the refcount
consistently.  The x86 MMU code doesn't care whether the page is
reserved or not; mmu_set_spte does a kvm_release_pfn_clean, hence it
makes sense for hva_to_pfn_remapped to try doing a get_page (via
kvm_get_pfn) after invoking the fault handler, just like the
get_user_pages family of functions does.
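
For reference, a rough sketch of the idea (illustrative only, not the
actual patch: the helper name follows the patch series, the
fixup_user_fault signature is the v4.5-era one, and the async/unlock
handling is omitted, so details will differ from the final code):

#include <linux/kvm_host.h>
#include <linux/mm.h>

static int hva_to_pfn_remapped(struct vm_area_struct *vma,
			       unsigned long addr, bool write_fault,
			       kvm_pfn_t *p_pfn)
{
	unsigned long pfn;
	bool unlocked = false;
	int r;

	r = follow_pfn(vma, addr, &pfn);
	if (r) {
		/*
		 * get_user_pages() fails for VM_IO/VM_PFNMAP vmas and
		 * does not call the fault handler, so invoke it here
		 * (for vfio this populates the BAR mapping) and retry.
		 */
		r = fixup_user_fault(current, current->mm, addr,
				     write_fault ? FAULT_FLAG_WRITE : 0,
				     &unlocked);
		if (r)
			return r;
		r = follow_pfn(vma, addr, &pfn);
		if (r)
			return r;
	}

	/*
	 * Mirror get_user_pages(): take the reference that the caller
	 * will later drop via kvm_release_pfn_clean().  kvm_get_pfn()
	 * only does a get_page() for non-reserved pfns, so reserved
	 * pfns are left alone.
	 */
	kvm_get_pfn(pfn);
	*p_pfn = pfn;
	return 0;
}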

Paolo
