Message-ID: <33a2fd519edc917d933517842cc077a19e865e3f.camel@amazon.com>
Date: Thu, 31 Oct 2024 15:30:59 +0000
From: "Gowans, James" <jgowans@...zon.com>
To: "quic_eberman@...cinc.com" <quic_eberman@...cinc.com>
CC: "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "rppt@...nel.org"
<rppt@...nel.org>, "brauner@...nel.org" <brauner@...nel.org>, "Graf (AWS),
Alexander" <graf@...zon.de>, "anthony.yznaga@...cle.com"
<anthony.yznaga@...cle.com>, "steven.sistare@...cle.com"
<steven.sistare@...cle.com>, "akpm@...ux-foundation.org"
<akpm@...ux-foundation.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "seanjc@...gle.com" <seanjc@...gle.com>,
"Woodhouse, David" <dwmw@...zon.co.uk>, "pbonzini@...hat.com"
<pbonzini@...hat.com>, "linux-mm@...ck.org" <linux-mm@...ck.org>, "Saenz
Julienne, Nicolas" <nsaenz@...zon.es>, "Durrant, Paul"
<pdurrant@...zon.co.uk>, "viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
"jack@...e.cz" <jack@...e.cz>, "linux-fsdevel@...r.kernel.org"
<linux-fsdevel@...r.kernel.org>, "jgg@...pe.ca" <jgg@...pe.ca>,
"usama.arif@...edance.com" <usama.arif@...edance.com>
Subject: Re: [PATCH 05/10] guestmemfs: add file mmap callback
On Tue, 2024-10-29 at 16:05 -0700, Elliot Berman wrote:
> On Mon, Aug 05, 2024 at 11:32:40AM +0200, James Gowans wrote:
> > Make the file data usable to userspace by adding mmap. That's all that
> > QEMU needs for guest RAM, so that's all we bother implementing for now.
> >
> > When mmapping the file, the VMA is marked as PFNMAP to indicate that
> > there are no struct pages for the memory in this VMA. remap_pfn_range()
> > is used to actually populate the page tables. All PTEs are pre-faulted
> > into the pgtables at mmap time so that the pgtables are usable when this
> > virtual address range is given to VFIO's MAP_DMA.
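For illustration, an mmap callback along the lines described there might look
roughly like the sketch below. guestmemfs_base_pfn() is a hypothetical helper
standing in for however the filesystem looks up the physical base of the
file's reserved region; the actual patch may well differ.

```c
#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical lookup of the file's backing physical region. */
unsigned long guestmemfs_base_pfn(struct file *file);

static int guestmemfs_file_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long size = vma->vm_end - vma->vm_start;
	unsigned long pfn = guestmemfs_base_pfn(file) + vma->vm_pgoff;

	/*
	 * remap_pfn_range() marks the VMA VM_PFNMAP (no struct pages
	 * behind it) and installs every PTE up front, so the page tables
	 * are already populated when this virtual address range is later
	 * handed to VFIO's MAP_DMA.
	 */
	return remap_pfn_range(vma, vma->vm_start, pfn, size,
			       vma->vm_page_prot);
}
```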
>
> Thanks for sending this out! I'm going through the series with the
> intention to see how it might fit within the existing guest_memfd work
> for pKVM/CoCo/Gunyah.
>
> It might've been mentioned in the MM alignment session -- you might be
> interested to join the guest_memfd bi-weekly call to see how we are
> overlapping [1].
>
> [1]: https://lore.kernel.org/kvm/ae794891-fe69-411a-b82e-6963b594a62a@redhat.com/T/
Hi Elliot, yes, I think a lot more overlap with guest_memfd is needed
here. The idea was to extend guestmemfs at some point to have a
guest_memfd style interface, but it was pointed out at the MM alignment
call that doing so would require guestmemfs to duplicate the API surface
of guest_memfd. This is undesirable. It would be better to have
persistence implemented as a custom allocator behind a normal
guest_memfd. I'm not too sure how this would actually be done in
practice, specifically:
- how the persistent pool would be defined
- how it would be supplied to guest_memfd
- how the guest_memfds would be re-discovered after kexec
But assuming we can figure out some way to do this, I think it's a
better way to go.
I'll join the guest_memfd call shortly to see the developments there and
where persistence would fit best.
Hopefully we can figure out in theory how this could work, then I'll put
together another RFC sketching it out.
JG