Message-ID: <20160704070314.GA13291@nvidia.com>
Date: Mon, 4 Jul 2016 00:03:20 -0700
From: Neo Jia <cjia@...dia.com>
To: Xiao Guangrong <guangrong.xiao@...ux.intel.com>
CC: Paolo Bonzini <pbonzini@...hat.com>,
<linux-kernel@...r.kernel.org>, <kvm@...r.kernel.org>,
Kirti Wankhede <kwankhede@...dia.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>
Subject: Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed
On Mon, Jul 04, 2016 at 02:39:22PM +0800, Xiao Guangrong wrote:
>
>
> On 06/30/2016 09:01 PM, Paolo Bonzini wrote:
> >The vGPU folks would like to trap the first access to a BAR by setting
> >vm_ops on the VMAs produced by mmap-ing a VFIO device. The fault handler
> >then can use remap_pfn_range to place some non-reserved pages in the VMA.
>
> Why does it require fetching the pfn when the fault is triggered rather
> than when mmap() is called?
Hi Guangrong,

The pfn has to be fetched when the fault is triggered because the mapping
between virtual MMIO and physical MMIO is only known at runtime, not at
mmap() time.
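
To make the flow concrete, the fault-time mapping Paolo describes would look
roughly like the sketch below, against the fault-handler signature of
2016-era kernels. This is only an illustration of the technique, not code
from the posted patchset; the names vgpu_bar_fault and vgpu_resolve_pfn are
hypothetical.

```c
/*
 * Sketch only: the vendor driver leaves the BAR VMA unpopulated at
 * mmap() time and fills it in on first access, once the virtual->
 * physical MMIO mapping is known.
 */
static int vgpu_bar_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	unsigned long pgoff = vmf->pgoff;	/* page the guest touched */
	unsigned long pfn, npages;

	/*
	 * Hypothetical helper: ask the vendor driver where this virtual
	 * BAR page currently lives in physical MMIO.
	 */
	if (vgpu_resolve_pfn(vma, pgoff, &pfn, &npages))
		return VM_FAULT_SIGBUS;

	/* Map the physically contiguous <pfn, npages> range into the VMA. */
	if (remap_pfn_range(vma, vma->vm_start + (pgoff << PAGE_SHIFT),
			    pfn, npages << PAGE_SHIFT, vma->vm_page_prot))
		return VM_FAULT_SIGBUS;

	/* The PTEs are installed; no struct page to hand back. */
	return VM_FAULT_NOPAGE;
}

static const struct vm_operations_struct vgpu_bar_vm_ops = {
	.fault = vgpu_bar_fault,
};
```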
>
> Why is the memory mapped by this mmap() not a portion of MMIO from the
> underlying physical device? If it is valid system memory, does this interface
> really need to be implemented in vfio? (you at least need to set VM_MIXEDMAP
> if it mixes system memory with MMIO)
>
It actually is a portion of the physical MMIO, which is set up by the vfio mmap.
> IIUC, the kernel assumes that a VM_PFNMAP area is contiguous memory, e.g. like
> current KVM and vaddr_get_pfn() in vfio, but it seems nvidia's patchset
> breaks this semantic as ops->validate_map_request() can adjust the physical
> address arbitrarily. (again, the name 'validate' should be changed to match
> what it is really doing)
The vgpu API allows you to adjust the target MMIO address and the size via
validate_map_request, but the result is still physically contiguous as
<start_pfn, size>.
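
In other words, the callback may redirect the requested range to different
physical MMIO, but it always hands back a single contiguous <start_pfn, size>
pair. A hypothetical shape of that contract, inferred from this discussion
rather than taken from the patchset, would be:

```c
/*
 * Sketch only: validate_map_request() receives the range the guest is
 * faulting on and may substitute another backing range, but the output
 * <start_pfn, size> must describe one physically contiguous region.
 */
int validate_map_request(struct vgpu_device *vgpu,
			 unsigned long virt_pgoff,	/* faulting page offset  */
			 unsigned long *start_pfn,	/* in/out: may redirect  */
			 unsigned long *size);		/* in/out: may adjust    */
```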
Thanks,
Neo
>
>