Message-ID: <20241030120453.GC6956@nvidia.com>
Date: Wed, 30 Oct 2024 09:04:53 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: David Hildenbrand <david@...hat.com>
Cc: John Hubbard <jhubbard@...dia.com>,
Alistair Popple <apopple@...dia.com>,
Christoph Hellwig <hch@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
linux-stable@...r.kernel.org,
Vivek Kasireddy <vivek.kasireddy@...el.com>,
Dave Airlie <airlied@...hat.com>, Gerd Hoffmann <kraxel@...hat.com>,
Matthew Wilcox <willy@...radead.org>, Peter Xu <peterx@...hat.com>,
Arnd Bergmann <arnd@...db.de>,
Daniel Vetter <daniel.vetter@...ll.ch>,
Dongwon Kim <dongwon.kim@...el.com>,
Hugh Dickins <hughd@...gle.com>,
Junxiao Chang <junxiao.chang@...el.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Oscar Salvador <osalvador@...e.de>
Subject: Re: [PATCH] mm/gup: restore the ability to pin more than 2GB at a
time
On Wed, Oct 30, 2024 at 09:34:51AM +0100, David Hildenbrand wrote:
> The unusual thing is not the amount of system memory we are pinning but *how
> many* pages we try pinning in the single call.
>
> If you stare at vfio_pin_pages_remote, we seem to be batching it.
>
> long req_pages = min_t(long, npage, batch->capacity);
>
> Which is
>
> #define VFIO_BATCH_MAX_CAPACITY (PAGE_SIZE / sizeof(struct page *))
>
> So you can fix this in your driver ;)
Yeah, everything that I'm aware of batches. RDMA also uses a 4k batch
size, and iommufd uses 64k.
Jason