Message-ID: <c2c02af7-1d6f-e54f-c7fb-99c5b7776014@deltatee.com>
Date: Tue, 29 Jan 2019 12:24:04 -0700
From: Logan Gunthorpe <logang@...tatee.com>
To: Jerome Glisse <jglisse@...hat.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J . Wysocki" <rafael@...nel.org>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Christian Koenig <christian.koenig@....com>,
Felix Kuehling <Felix.Kuehling@....com>,
Jason Gunthorpe <jgg@...lanox.com>, linux-pci@...r.kernel.org,
dri-devel@...ts.freedesktop.org, Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
Joerg Roedel <jroedel@...e.de>,
iommu@...ts.linux-foundation.org
Subject: Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device
vma
On 2019-01-29 12:11 p.m., Jerome Glisse wrote:
> On Tue, Jan 29, 2019 at 11:36:29AM -0700, Logan Gunthorpe wrote:
>>
>>
>> On 2019-01-29 10:47 a.m., jglisse@...hat.com wrote:
>>
>>> + /*
>>> + * Optional for device drivers that want to allow peer to peer (p2p)
>>> + * mapping of their vma (which can be backed by some device memory) to
>>> + * another device.
>>> + *
>>> + * Note that the exporting device driver might not have mapped anything
>>> + * inside the vma for the CPU but might still want to allow a peer
>>> + * device to access the range of memory corresponding to a range in
>>> + * that vma.
>>> + *
>>> + * FOR PREDICTABILITY, IF A DRIVER SUCCESSFULLY MAPS A RANGE ONCE FOR A
>>> + * DEVICE, THEN FURTHER MAPPINGS OF THE SAME RANGE, WHILE THE VMA IS
>>> + * STILL VALID, SHOULD ALSO SUCCEED. Following this rule allows the
>>> + * importing device to map once during setup and report any failure at
>>> + * that time to userspace. Further mappings of the same range might
>>> + * happen after mmu notifier invalidation over the range. The exporting
>>> + * device can use this to move things around (defrag BAR space for
>>> + * instance) or do other similar tasks.
>>> + *
>>> + * THE IMPORTER MUST OBEY mmu_notifier NOTIFICATIONS AND CALL
>>> + * p2p_unmap() WHEN A NOTIFIER IS CALLED FOR THE RANGE! THIS CAN HAPPEN
>>> + * AT ANY POINT IN TIME WITH NO LOCK HELD.
>>> + *
>>> + * In the functions below, the device argument is the importing device;
>>> + * the exporting device is the device to which the vma belongs.
>>> + */
>>> + long (*p2p_map)(struct vm_area_struct *vma,
>>> + struct device *device,
>>> + unsigned long start,
>>> + unsigned long end,
>>> + dma_addr_t *pa,
>>> + bool write);
>>> + long (*p2p_unmap)(struct vm_area_struct *vma,
>>> + struct device *device,
>>> + unsigned long start,
>>> + unsigned long end,
>>> + dma_addr_t *pa);
>>
>> I don't understand why we need new p2p_[un]map function pointers for
>> this. In subsequent patches, they never appear to be set anywhere and
>> are only called by the HMM code. I'd have expected them to be called by
>> some core VMA code and set by HMM, as that's what vm_operations_struct is
>> for.
>>
>> But the code is all very confusing, hard to follow, and seems to be
>> missing significant chunks, so I'm not really sure what is going on.
>
> It is set by the device driver when userspace does mmap(fd), where fd
> comes from open("/dev/somedevicefile"). So it is set by the device
> driver, in its mmap callback (the mmap callback of struct
> file_operations). HMM has nothing to do with this. For this patch you
> can completely ignore all the HMM patches. Maybe posting this as 2
> separate patchsets would make it clearer.
>
> For instance, see [1] for how a non-HMM driver can export its memory
> by just setting those callbacks. Note that a proper implementation of
> this should also include some kind of driver policy on what to allow
> to map and what not to allow ... All this is driver specific in any
> case.
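
OK. So, if I follow, the exporting-driver side is meant to look roughly
like the sketch below (all the mydev_* names are made up, and
.p2p_map/.p2p_unmap are the new vm_operations_struct members from this
patch):

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>

static long mydev_p2p_map(struct vm_area_struct *vma, struct device *importer,
                          unsigned long start, unsigned long end,
                          dma_addr_t *pa, bool write)
{
        /* Driver policy + translation: turn [start, end) of this vma
         * (backed by device BAR memory) into bus addresses usable by
         * the importing device, filling in pa[].  Stubbed out here. */
        return -EOPNOTSUPP;
}

static long mydev_p2p_unmap(struct vm_area_struct *vma, struct device *importer,
                            unsigned long start, unsigned long end,
                            dma_addr_t *pa)
{
        /* Tear down whatever mydev_p2p_map() set up for this importer. */
        return 0;
}

static const struct vm_operations_struct mydev_vm_ops = {
        .p2p_map   = mydev_p2p_map,
        .p2p_unmap = mydev_p2p_unmap,
};

static int mydev_mmap(struct file *file, struct vm_area_struct *vma)
{
        /* The exporting driver installs the ops from its mmap callback. */
        vma->vm_ops = &mydev_vm_ops;
        return 0;
}

static const struct file_operations mydev_fops = {
        .owner = THIS_MODULE,
        .mmap  = mydev_mmap,
};
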
I'd suggest [1] should be a part of the patchset so we can actually see
a user of the stuff you're adding.
But it still doesn't explain everything, as without the HMM code nothing
calls the new vm_ops. And there are still no callers for the p2p_test
functions you added. And I still don't understand why we need the new
vm_ops, or who calls them and when. Why can't drivers use the existing
'fault' vm_op and call a new helper function to map p2p when appropriate,
or a different helper function to map a large range in their mmap
operation? Just like regular mmap code...
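
To make that concrete, here's roughly what I'm picturing (again just a
sketch: the p2p_* helpers below don't exist, they stand in for the new
helper functions I'm suggesting, and the mydrv_* names are made up):

#include <linux/fs.h>
#include <linux/mm.h>

static vm_fault_t mydrv_vma_fault(struct vm_fault *vmf)
{
        /* Existing 'fault' op: the driver maps its BAR page on demand,
         * using a (hypothetical) helper that also makes the page
         * available for p2p access. */
        return p2p_vmf_insert_bar_page(vmf->vma, vmf->address, vmf->pgoff);
}

static const struct vm_operations_struct mydrv_vm_ops = {
        .fault = mydrv_vma_fault,
};

static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
{
        vma->vm_ops = &mydrv_vm_ops;

        /* ... or map the whole range up front with a different
         * (hypothetical) helper, much like regular mmap code would use
         * remap_pfn_range(). */
        return p2p_remap_bar_range(vma, vma->vm_start,
                                   vma->vm_end - vma->vm_start);
}

That keeps the driver's mmap/fault paths looking like regular mmap code,
with the p2p plumbing in helpers rather than in new vm_ops.
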
Logan