Message-ID: <20250805125643.GK184255@nvidia.com>
Date: Tue, 5 Aug 2025 09:56:43 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: David Hildenbrand <david@...hat.com>
Cc: Alex Williamson <alex.williamson@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"lizhe.67@...edance.com" <lizhe.67@...edance.com>
Subject: Re: [GIT PULL] VFIO updates for v6.17-rc1
On Tue, Aug 05, 2025 at 02:41:38PM +0200, David Hildenbrand wrote:
> On 05.08.25 14:38, Jason Gunthorpe wrote:
> > On Tue, Aug 05, 2025 at 02:07:49PM +0200, David Hildenbrand wrote:
> > > I don't see an easy way to guarantee that. E.g., populate_section_memmap
> > > really just does a kvmalloc_node() and
> > > __populate_section_memmap()->memmap_alloc() a memblock_alloc().
> >
> > Well, it is really easy, if you do the kvmalloc_node and you get the
> > single unwanted struct page value, then call it again and free the
> > first one. The second call is guaranteed to not return the unwanted
> > value because the first call has it allocated.
>
> So you want to walk all existing sections to check that? :)
We don't need to walk; compute page-1 and carefully run that through
the page_to_pfn algorithm.
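
To make the retry idea concrete, a rough sketch (illustrative only;
contains_unwanted_page() is a made-up helper standing in for that
page-1/page_to_pfn check):

	static void *alloc_memmap_avoiding(size_t size, int nid)
	{
		void *map = kvmalloc_node(size, GFP_KERNEL, nid);

		/* Does this chunk hold the one struct page we must avoid? */
		if (map && contains_unwanted_page(map, size)) {
			void *retry = kvmalloc_node(size, GFP_KERNEL, nid);

			/*
			 * Free only after the retry, so the second call
			 * cannot be handed back the same range.
			 */
			kvfree(map);
			map = retry;
		}
		return map;
	}
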
> That's the kind of approach I would describe with the words Linus used.
It's some small boot-time nastiness; we do this all the time, messing up
the slow path so the fast paths can be simple.
Jason