Message-ID: <6da2d821-8efa-42da-af96-232d97cb40d8@nvidia.com>
Date: Wed, 30 Oct 2024 10:25:43 -0700
From: John Hubbard <jhubbard@...dia.com>
To: Jason Gunthorpe <jgg@...dia.com>, David Hildenbrand <david@...hat.com>
Cc: Alistair Popple <apopple@...dia.com>,
Christoph Hellwig <hch@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
linux-stable@...r.kernel.org, Vivek Kasireddy <vivek.kasireddy@...el.com>,
Dave Airlie <airlied@...hat.com>, Gerd Hoffmann <kraxel@...hat.com>,
Matthew Wilcox <willy@...radead.org>, Peter Xu <peterx@...hat.com>,
Arnd Bergmann <arnd@...db.de>, Daniel Vetter <daniel.vetter@...ll.ch>,
Dongwon Kim <dongwon.kim@...el.com>, Hugh Dickins <hughd@...gle.com>,
Junxiao Chang <junxiao.chang@...el.com>,
Mike Kravetz <mike.kravetz@...cle.com>, Oscar Salvador <osalvador@...e.de>
Subject: Re: [PATCH] mm/gup: restore the ability to pin more than 2GB at a
time

On 10/30/24 5:04 AM, Jason Gunthorpe wrote:
> On Wed, Oct 30, 2024 at 09:34:51AM +0100, David Hildenbrand wrote:
>
>> The unusual thing is not the amount of system memory we are pinning, but *how
>> many* pages we try to pin in a single call.
>>
>> If you stare at vfio_pin_pages_remote, we seem to be batching it.
>>
>> long req_pages = min_t(long, npage, batch->capacity);
>>
>> Which is
>>
>> #define VFIO_BATCH_MAX_CAPACITY (PAGE_SIZE / sizeof(struct page *))
>>
>> So you can fix this in your driver ;)
>
> Yeah, everything I'm aware of batches. RDMA also uses a 4k batch
> size, and iommufd uses 64k.
>
> Jason

Yes. It's a surprise, but the driver can do that.
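
For illustration, here is a minimal sketch of such driver-side batching,
modeled on what vfio_pin_pages_remote() does. The names (pin_in_batches,
BATCH_CAPACITY) are made up, the caller is assumed to hold
mmap_read_lock() as pin_user_pages() requires, and pages[] is assumed to
have room for all of nr_pages:

#include <linux/errno.h>
#include <linux/minmax.h>
#include <linux/mm.h>

/*
 * Same arithmetic as VFIO_BATCH_MAX_CAPACITY above: 512 pages, i.e.
 * 2MB of memory, per batch with 4K pages and 8-byte pointers.
 */
#define BATCH_CAPACITY	(PAGE_SIZE / sizeof(struct page *))

/*
 * Pin nr_pages of user memory starting at start, at most
 * BATCH_CAPACITY pages per pin_user_pages() call.
 */
static long pin_in_batches(unsigned long start, unsigned long nr_pages,
			   unsigned int gup_flags, struct page **pages)
{
	long pinned = 0;

	while (pinned < nr_pages) {
		long req = min_t(long, nr_pages - pinned, BATCH_CAPACITY);
		long ret = pin_user_pages(start + pinned * PAGE_SIZE, req,
					  gup_flags, pages + pinned);

		if (ret <= 0) {
			/* Unwind the earlier batches before bailing out. */
			unpin_user_pages(pages, pinned);
			return ret ? ret : -EFAULT;
		}
		pinned += ret;
	}
	return pinned;
}

A partial pin (a 0 return) is treated as an error here for simplicity;
a real driver might retry instead.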
thanks,
--
John Hubbard