Message-ID: <740924f6-e241-428d-beaa-630cad8c3e05@lankhorst.se>
Date: Thu, 4 Sep 2025 17:02:11 +0200
From: Maarten Lankhorst <dev@...khorst.se>
To: Thomas Hellström <thomas.hellstrom@...ux.intel.com>,
 David Hildenbrand <david@...hat.com>,
 Lucas De Marchi <lucas.demarchi@...el.com>,
 Rodrigo Vivi <rodrigo.vivi@...el.com>, David Airlie <airlied@...il.com>,
 Simona Vetter <simona@...ll.ch>, Maxime Ripard <mripard@...nel.org>,
 Natalie Vock <natalie.vock@....de>, Tejun Heo <tj@...nel.org>,
 Johannes Weiner <hannes@...xchg.org>, 'Michal Koutný'
 <mkoutny@...e.com>, Michal Hocko <mhocko@...nel.org>,
 Roman Gushchin <roman.gushchin@...ux.dev>,
 Shakeel Butt <shakeel.butt@...ux.dev>, Muchun Song <muchun.song@...ux.dev>,
 Andrew Morton <akpm@...ux-foundation.org>,
 Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
 "'Liam R . Howlett'" <Liam.Howlett@...cle.com>,
 Vlastimil Babka <vbabka@...e.cz>, Mike Rapoport <rppt@...nel.org>,
 Suren Baghdasaryan <surenb@...gle.com>,
 Thomas Zimmermann <tzimmermann@...e.de>
Cc: Michal Hocko <mhocko@...e.com>, intel-xe@...ts.freedesktop.org,
 dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
 cgroups@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC 0/3] cgroups: Add support for pinned device memory

Hey,

On 2025-09-02 at 15:42, Thomas Hellström wrote:
> On Mon, 2025-09-01 at 20:38 +0200, David Hildenbrand wrote:
>> On 01.09.25 20:21, Thomas Hellström wrote:
>>> Hi,
>>>
>>> On Mon, 2025-09-01 at 20:16 +0200, Maarten Lankhorst wrote:
>>>> Hello David,
>>>>
>>>> On 2025-09-01 at 14:25, David Hildenbrand wrote:
>>>>> On 19.08.25 13:49, Maarten Lankhorst wrote:
>>>>>> When exporting dma-bufs to other devices, even when it is allowed to
>>>>>> use move_notify in some drivers, performance will degrade severely
>>>>>> when eviction happens.
>>>>>>
>>>>>> A particular example where this can happen is in a multi-card setup,
>>>>>> where PCI-E peer-to-peer is used to avoid access to system memory.
>>>>>>
>>>>>> If the buffer is evicted to system memory, not only the evicting GPU
>>>>>> where the buffer resided is affected, but it will also stall the GPU
>>>>>> that is waiting on the buffer.
>>>>>>
>>>>>> It also makes sense for long-running jobs not to be preempted by
>>>>>> having their buffers evicted, so it will make sense to have the
>>>>>> ability to pin from system memory too.
>>>>>>
>>>>>> This is dependent on patches by Dave Airlie, so it's not part of this
>>>>>> series yet. But I'm planning on extending pinning to the memory
>>>>>> cgroup controller in the future to handle this case.
>>>>>>
>>>>>> Implementation details:
>>>>>>
>>>>>> For each cgroup up to the root cgroup, the 'min' limit is checked
>>>>>> against the current effective pinned value. If the value would go
>>>>>> above 'min', the pinning attempt is rejected.
>>>>>>
>>>>>> Pinned memory is handled slightly differently and affects the
>>>>>> calculation of the effective min/low values: pinned memory is
>>>>>> subtracted from both, and needs to be added back afterwards when
>>>>>> calculating.
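
For illustration, the check described above amounts to something like the
sketch below. All of the names (struct pincg, try_charge_pinned(),
pinned_effective, memory_min) are made up for the example and are not the
actual interface of this series:

#include <linux/types.h>

struct pincg {
        struct pincg *parent;
        unsigned long pinned_effective; /* currently pinned, in pages */
        unsigned long memory_min;       /* 'min' protection, in pages */
};

/*
 * Walk from the charging cgroup up to the root; reject the pin if it
 * would push any level's pinned total above its 'min' protection.
 * (The min/low adjustment mentioned above is left out of the sketch.)
 */
static bool try_charge_pinned(struct pincg *cg, unsigned long nr_pages)
{
        struct pincg *iter;

        for (iter = cg; iter; iter = iter->parent)
                if (iter->pinned_effective + nr_pages > iter->memory_min)
                        return false;   /* pinning attempt rejected */

        for (iter = cg; iter; iter = iter->parent)
                iter->pinned_effective += nr_pages;

        return true;
}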
>>>>>
>>>>> The term "pinning" is overloaded, and frequently we refer to
>>>>> pin_user_pages() and friends.
>>>>>
>>>>> So I'm wondering if there is an alternative term to describe what
>>>>> you want to achieve.
>>>>>
>>>>> Is it something like "unevictable"?
>>>> It could be required to include a call to pin_user_pages(), in case a
>>
>> We'll only care about long-term pinnings (i.e., FOLL_LONGTERM).
>> Ordinary 
>> short-term pinning is just fine.
>>
>> (see how even "pinning" is overloaded? :) )
>>
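
Such a long-term pin typically takes roughly the following shape in those
users; this is an illustrative sketch only (the helper name longterm_pin()
is invented and error handling is trimmed), not code from this series:

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Pin nr_pages of user memory at uaddr for an indefinite duration. */
static struct page **longterm_pin(unsigned long uaddr, int nr_pages)
{
        struct page **pages;
        int pinned;

        pages = kvmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return ERR_PTR(-ENOMEM);

        /* FOLL_LONGTERM: the pages may stay pinned indefinitely. */
        pinned = pin_user_pages_fast(uaddr, nr_pages,
                                     FOLL_WRITE | FOLL_LONGTERM, pages);
        if (pinned != nr_pages) {
                if (pinned > 0)
                        unpin_user_pages(pages, pinned);
                kvfree(pages);
                return ERR_PTR(pinned < 0 ? pinned : -EFAULT);
        }

        return pages;
}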
>>>> process wants to pin from a user's address space to the gpu.
>>>>
>>>> It's not done yet, but it wouldn't surprise me if we want to include
>>>> it in the future. Functionally it's similar to mlock() and related
>>>> functions.
>>
>> Traditionally, vfio, io_uring and rdma do exactly that: they use GUP to
>> longterm pin and then account that memory towards RLIMIT_MEMLOCK.
>>
>> If you grep for "rlimit(RLIMIT_MEMLOCK)", you'll see what I mean.
>>
>> There are known issues with that: imagine long-term pinning the same
>> folio through GUP with 2 interfaces (e.g., vfio, io_uring, rdma), or
>> within the same interface.
>>
>> You'd account the memory multiple times, which is horrible. And so far
>> there is no easy way out.
>>
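
Concretely, that accounting has traditionally looked something like the
sketch below (closely modeled on io_uring's __io_account_mem(), simplified);
note that nothing here can detect the same folio being accounted twice
through two different interfaces:

#include <linux/atomic.h>
#include <linux/mm.h>
#include <linux/sched/signal.h>
#include <linux/sched/user.h>

static int account_pinned(struct user_struct *user, unsigned long nr_pages)
{
        unsigned long page_limit, cur_pages, new_pages;

        /* RLIMIT_MEMLOCK is in bytes, the accounting is in pages. */
        page_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;

        cur_pages = atomic_long_read(&user->locked_vm);
        do {
                new_pages = cur_pages + nr_pages;
                if (new_pages > page_limit)
                        return -ENOMEM;
        } while (!atomic_long_try_cmpxchg(&user->locked_vm,
                                          &cur_pages, new_pages));

        return 0;
}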
>>>>
>>>> Perhaps call it mlocked instead?
>>>
>>> I was under the impression that mlock()ed memory can be migrated to
>>> other physical memory but not to swap, whereas pinned memory needs to
>>> remain the exact same physical memory?
>>
>> Yes, exactly.
>>
>>>
>>> IMO "pinned" is pretty established within GPU drivers (dma-buf,
>>> TTM)
>>> and essentially means the same as "pin" in "pin_user_pages", so
>>> inventing a new name would probably cause even more confusion?
>>
>> If it's the same thing, absolutely. But Maarten said "It's not done
>> yet, but it wouldn't surprise me if we want to include it in the
>> future".
>>
>> So how is the memory we are talking about in this series "pinned"?
> 
> Reading the cover-letter from Maarten, he only talks about pinning
> affecting performance, which would be similar to user-space calling
> mlock(), although I doubt that moving content to other physical pages
> within the same memory type will be a near-term use-case.
> 
> However, what's more important are situations where a device (like RDMA)
> needs to pin, because it can't handle the case where access is
> interrupted and content is transferred to another physical location.
>
> Maarten, could you perhaps elaborate on whether this series is intended
> for both these use-cases?
Yeah, this is definitely for the latter case too.

It's a performance optimization for the generic case, and very nice to
have for the second case, to prevent unlimited vram pinning. With
cgroups, we would be able to limit the amount of memory used there.

Kind regards,
~Maarten
