Message-ID: <87r07yp0ng.fsf@nvdebian.thelocal>
Date: Wed, 30 Oct 2024 17:18:17 +1100
From: Alistair Popple <apopple@...dia.com>
To: John Hubbard <jhubbard@...dia.com>
Cc: Christoph Hellwig <hch@...radead.org>, Andrew Morton
<akpm@...ux-foundation.org>, LKML <linux-kernel@...r.kernel.org>,
linux-mm@...ck.org, linux-stable@...r.kernel.org, Vivek Kasireddy
<vivek.kasireddy@...el.com>, David Hildenbrand <david@...hat.com>, Dave
Airlie <airlied@...hat.com>, Gerd Hoffmann <kraxel@...hat.com>, Matthew
Wilcox <willy@...radead.org>, Jason Gunthorpe <jgg@...dia.com>, Peter Xu
<peterx@...hat.com>, Arnd Bergmann <arnd@...db.de>, Daniel Vetter
<daniel.vetter@...ll.ch>, Dongwon Kim <dongwon.kim@...el.com>, Hugh
Dickins <hughd@...gle.com>, Junxiao Chang <junxiao.chang@...el.com>, Mike
Kravetz <mike.kravetz@...cle.com>, Oscar Salvador <osalvador@...e.de>
Subject: Re: [PATCH] mm/gup: restore the ability to pin more than 2GB at a time

John Hubbard <jhubbard@...dia.com> writes:
> On 10/29/24 9:42 PM, Christoph Hellwig wrote:
>> On Tue, Oct 29, 2024 at 09:39:15PM -0700, John Hubbard wrote:
>>> I expect I could piece together something with Nouveau, given enough
>>> time and help from Ben Skeggs and Danillo and all...
>>>
>>> Yes, this originated with the out-of-tree driver. But it never occurred
>>> to me that upstream would be uninterested in an obvious fix to an obvious
>>> regression.
>> Because pinning down these amounts of memory is completely insane.
>> I don't mind the switch to kvmalloc, but we need to put in an upper
>> bound on what can be pinned.
>
> I'm wondering though, how it is that we decide how much of the user's
> system we prevent them from using? :) People with hardware accelerators
> do not always have page fault capability, and yet these troublesome
> users insist on stacking their system full of DRAM and then pointing
> the accelerator to it.
>
> How would we choose a value? Memory sizes keep going up...
The obvious answer is you let users decide. I did have a patch series to
do that via a cgroup[1]. However, I dropped that series, mostly because I
couldn't find any users of such a limit to provide feedback on how they
would use it or how they wanted it to work.
- Alistair
[1] - https://lore.kernel.org/linux-mm/cover.c238416f0e82377b449846dbb2459ae9d7030c8e.1675669136.git-series.apopple@nvidia.com/