Open Source and information security mailing list archives
 
Date:	Wed, 6 Jul 2016 12:02:06 +0800
From:	Xiao Guangrong <guangrong.xiao@...ux.intel.com>
To:	Neo Jia <cjia@...dia.com>
Cc:	Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, Kirti Wankhede <kwankhede@...dia.com>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Radim Krčmář <rkrcmar@...hat.com>
Subject: Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed



On 07/06/2016 10:57 AM, Neo Jia wrote:
> On Wed, Jul 06, 2016 at 10:35:18AM +0800, Xiao Guangrong wrote:
>>
>>
>> On 07/06/2016 10:18 AM, Neo Jia wrote:
>>> On Wed, Jul 06, 2016 at 10:00:46AM +0800, Xiao Guangrong wrote:
>>>>
>>>>
>>>> On 07/05/2016 08:18 PM, Paolo Bonzini wrote:
>>>>>
>>>>>
>>>>> On 05/07/2016 07:41, Neo Jia wrote:
>>>>>> On Thu, Jun 30, 2016 at 03:01:49PM +0200, Paolo Bonzini wrote:
>>>>>>> The vGPU folks would like to trap the first access to a BAR by setting
>>>>>>> vm_ops on the VMAs produced by mmap-ing a VFIO device.  The fault handler
>>>>>>> then can use remap_pfn_range to place some non-reserved pages in the VMA.
>>>>>>>
>>>>>>> KVM lacks support for this kind of non-linear VM_PFNMAP mapping, and these
>>>>>>> patches should fix this.
>>>>>>
>>>>>> Hi Paolo,
>>>>>>
>>>>>> I have tested your patches with the mediated passthru patchset that is being
>>>>>> reviewed on the KVM and QEMU mailing lists.
>>>>>>
>>>>>> The fault handler gets called successfully and the previously mapped memory gets
>>>>>> unmapped correctly via unmap_mapping_range.
>>>>>
>>>>> Great, then I'll include them in 4.8.
>>>>
>>>> The code is okay, but I still question whether this implementation (fetching MMIO
>>>> pages in the fault handler) is actually needed. We'd better include these patches
>>>> after the design of the vfio framework is settled.
>>>
>>> Hi Guangrong,
>>>
>>> I disagree. The design of the VFIO framework has been actively discussed on the KVM
>>> and QEMU mailing lists for a while, and the fault handler was agreed upon to provide
>>> flexibility for different driver vendors' implementations. With that said, I am
>>> still open to discussing this framework with you and anybody else, as the goal
>>> is to allow multiple vendors to plug into it to support their
>>> mediated device virtualization schemes, such as Intel, IBM and us.
>>
>> The discussion is still going on, and the current vfio patchset under review is still
>> problematic.
>
> My point is that the fault handler part has been discussed already; with that said, I
> am always open to any constructive suggestions to make things better and more
> maintainable. (Appreciate your code review on the VFIO thread, I think we still
> owe you another response, will do that.)
>

It can always be changed, especially since the vfio patchset is not yet in good shape.

>>
>>>
>>> May I ask what exact issue you have with this interface for Intel to support
>>> your own GPU virtualization?
>>
>> Intel's vGPU can work with this framework. We really appreciate your / nvidia's
>> contribution.
>
> Then, I don't think we should embargo Paolo's patch.

This patchset is specific to that framework design, i.e., mapping memory when a fault
happens rather than at mmap() time, and that design is exactly what we have been
discussing for nearly two days.
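For readers of the archive, the mechanism under debate can be sketched roughly as
follows. This is an illustration only, not the actual mediated-device patches: all
mdev_dummy_* names are hypothetical, and the .fault signature shown is the one used
by ~4.7-era kernels.

```c
/*
 * Sketch: defer a BAR mapping to first access by installing vm_ops at
 * mmap() time and populating the VMA from the fault handler with
 * remap_pfn_range().  Hypothetical names throughout.
 */
#include <linux/fs.h>
#include <linux/mm.h>

static int mdev_dummy_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	/* Placeholder: a vendor driver would resolve the real backing pfn. */
	unsigned long pfn = vma->vm_pgoff;

	/* Populate the VMA on demand instead of at mmap() time. */
	if (remap_pfn_range(vma, vma->vm_start, pfn,
			    vma->vm_end - vma->vm_start, vma->vm_page_prot))
		return VM_FAULT_SIGBUS;

	return VM_FAULT_NOPAGE;
}

static const struct vm_operations_struct mdev_dummy_vm_ops = {
	.fault	= mdev_dummy_fault,
};

static int mdev_dummy_mmap(struct file *file, struct vm_area_struct *vma)
{
	/* Install no pages here; the first access triggers the fault path. */
	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
	vma->vm_ops = &mdev_dummy_vm_ops;
	return 0;
}

/*
 * When the backing pages must change, the driver zaps the stale ptes so
 * that the next access refaults through mdev_dummy_fault() again:
 *
 *	unmap_mapping_range(mapping, offset, size, 1);
 */
```

This is the non-linear VM_PFNMAP case the cover letter refers to: KVM cannot
assume a linear pfn layout for such a VMA, which is what Paolo's two patches
address.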
