lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <87jze88xch.fsf@nvdebian.thelocal>
Date: Wed, 16 Oct 2024 15:46:28 +1100
From: Alistair Popple <apopple@...dia.com>
To: Thomas Hellström <thomas.hellstrom@...ux.intel.com>
Cc: Jason Gunthorpe <jgg@...dia.com>, intel-xe@...ts.freedesktop.org,
 Matthew Brost <matthew.brost@...el.com>, Simona Vetter
 <simona.vetter@...ll.ch>, DRI-devel <dri-devel@...ts.freedesktop.org>,
 Linux Memory Management List <linux-mm@...ck.org>, LKML
 <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] mm/hmm, mm/migrate_device: Allow p2p access and p2p
 migration


Thomas Hellström <thomas.hellstrom@...ux.intel.com> writes:

> On Tue, 2024-10-15 at 10:02 -0300, Jason Gunthorpe wrote:
>> On Tue, Oct 15, 2024 at 02:41:24PM +0200, Thomas Hellström wrote:
>> > > It has nothing to do with kernel P2P, you are just allowing more
>> > > selective filtering of dev_private_owner. You should focus on
>> > > that in
>> > > the naming, not p2p. ie allow_dev_private()
>> > > 
>> > > P2P is stuff that is dealing with MEMORY_DEVICE_PCI_P2PDMA.
>> > 
>> > Yes, although the intention was to incorporate also other fast
>> > interconnects in "P2P", not just "PCIe P2P", but I'll definitely
>> > take a
>> > look at the naming.
>> 
>> It has nothing to do with that, you are just filtering the device
>> private pages differently than default.
>> 
>> Your end use might be P2P, but at this API level it certainly is not.
>
> Sure. Will find something more suitable.
>
>> 
>> > > This is just allowing more instances of the same driver to co-
>> > > ordinate
>> > > their device private memory handle, for whatever purpose.
>> > 
>> > Exactly, or theoretically even cross-driver.
>> 
>> I don't want to see things like drivers changing their pgmap handles
>> privately somehow. If we are going to make it cross driver then it
>> needs to be generalized alot more.
>
> Cross-driver is initially not a thing, so let's worry about that later.
> My impression though is that this is the only change required for
> hmm_range_fault() and that infrastructure for opt-in and dma-mapping
> would need to be provided elsewhere?

Cross-driver is tricky because the device-private pages have no meaning
outside of the driver which owns/allocates them. One option is to have a
callback which returns P2PDMA pages which can then be dma-mapped. See
https://lore.kernel.org/linux-mm/20241015152348.3055360-1-ymaman@nvidia.com/
for an example of that.

>> 
>> > > 
>> > > Otherwise I don't see a particular problem, though we have talked
>> > > about widening the matching for device_private more broadly using
>> > > some
>> > > kind of grouping tag or something like that instead of a
>> > > callback.
>> > > You
>> > > may consider that as an alternative
>> > 
>> > Yes. Looked at that, but (if I understand you correctly) that would
>> > be
>> > the case mentioned in the commit message where the group would be
>> > set
>> > up statically at dev_pagemap creation time?
>> 
>> Not necessarily statically, but the membership would be stored in the
>> pagemap and by updated during hotplug/etc
>> 
>> If this is for P2P then the dynamic behavior is pretty limited, some
>> kind of NxN bitmap.
>> 
>> > > hmm_range struct inside a caller private data struct and use that
>> > > instead if inventing a whole new struct and pointer.
>> > 
>> > Our first attempt was based on that but then that wouldn't be
>> > reusable
>> > in the migrate_device.c code. Hence the extra indirection.
>> 
>> It is performance path, you should prefer duplication rather than
>> slowing it down..
>
> OK. Will look at duplicating.
>
> Thanks,
> Thomas
>
>
>> 
>> Jason


Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ