Message-ID: <249d2614-0bcc-4ca8-b24e-7c0578a81dce@nvidia.com>
Date: Tue, 29 Oct 2024 21:30:41 -0700
From: John Hubbard <jhubbard@...dia.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
linux-stable@...r.kernel.org, Vivek Kasireddy <vivek.kasireddy@...el.com>,
David Hildenbrand <david@...hat.com>, Dave Airlie <airlied@...hat.com>,
Gerd Hoffmann <kraxel@...hat.com>, Matthew Wilcox <willy@...radead.org>,
Jason Gunthorpe <jgg@...dia.com>, Peter Xu <peterx@...hat.com>,
Arnd Bergmann <arnd@...db.de>, Daniel Vetter <daniel.vetter@...ll.ch>,
Dongwon Kim <dongwon.kim@...el.com>, Hugh Dickins <hughd@...gle.com>,
Junxiao Chang <junxiao.chang@...el.com>,
Mike Kravetz <mike.kravetz@...cle.com>, Oscar Salvador <osalvador@...e.de>
Subject: Re: [PATCH] mm/gup: restore the ability to pin more than 2GB at a
time
On 10/29/24 9:21 PM, Christoph Hellwig wrote:
> On Tue, Oct 29, 2024 at 08:01:16PM -0700, John Hubbard wrote:
>> A user-visible consequence has now appeared: user space can no longer
>> pin more than 2GB of memory on x86_64. That's because, on a 4KB
>> PAGE_SIZE system, when user space tries to pin 2GB (indirectly, via a
>> device driver that calls pin_user_pages()), this requires an
>> allocation of a folio pointers array of MAX_PAGE_ORDER size, which is
>> the limit for kmalloc().
>
> Do you have a report where someone tries to pin that much memory in a
> single call? What driver is this? Because it seems like a not very
> smart thing to do.
>
I do, yes. What happens is that when you use GPUs, drivers like to pin
system memory, and then point the GPU page tables at that memory. For
older GPUs that don't support replayable page faults, that's required.

So this behavior has been around forever.

The customer was qualifying their software and noticed that before
Linux 6.10, they could pin >2GB, and with 6.11, they could not.

Whether it is "wise" for user space to pin that much at once is a
reasonable question, but at least one place is (or was!) doing it.
thanks,
--
John Hubbard