Message-ID: <20240729215954.fpnh4wr7jq4doblx@amd.com>
Date: Mon, 29 Jul 2024 16:59:54 -0500
From: Michael Roth <michael.roth@....com>
To: Paolo Bonzini <pbonzini@...hat.com>
CC: <linux-kernel@...r.kernel.org>, <kvm@...r.kernel.org>, <seanjc@...gle.com>
Subject: Re: [PATCH v2 14/14] KVM: guest_memfd: abstract how prepared folios
are recorded
On Fri, Jul 26, 2024 at 02:51:57PM -0400, Paolo Bonzini wrote:
> Right now, large folios are not supported in guest_memfd, and therefore the order
> used by kvm_gmem_populate() is always 0. In this scenario, using the up-to-date
> bit to track preparedness is nice and easy because we have one bit available
> per page.
>
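For reference, with order-0 folios the scheme above boils down to
roughly the sketch below.  The helper names are mine, purely for
illustration; folio_test_uptodate() and folio_mark_uptodate() are the
usual helpers from <linux/page-flags.h> and <linux/pagemap.h>:

  /*
   * Sketch only: an order-0 folio covers a single 4K page, so the
   * folio's uptodate flag is enough to record whether that one page
   * has already been prepared.
   */
  static bool gmem_folio_prepared(struct folio *folio)
  {
          return folio_test_uptodate(folio);
  }

  static void gmem_folio_mark_prepared(struct folio *folio)
  {
          folio_mark_uptodate(folio);
  }
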
> In the future, however, we might have large pages that are partially populated;
> for example, in the case of SEV-SNP, if a large page has both shared and private
> areas inside, it is necessary to populate it at a granularity that is smaller
> than that of the guest_memfd's backing store. In that case we will have
> to track preparedness at a 4K level, probably as a bitmap.
>
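Once large folios can be partially populated, I'd expect the per-4K
tracking to end up looking something like the following.  This is
purely a sketch: the names and where the bitmap lives are invented
here, not taken from the patch:

  /*
   * Hypothetical: one bit per 4K page within a large folio, kept in a
   * bitmap associated with the folio (or the gmem inode).
   */
  static void gmem_mark_range_prepared(unsigned long *prepared,
                                       pgoff_t index, int order)
  {
          bitmap_set(prepared, index, 1 << order);
  }

  static bool gmem_range_prepared(unsigned long *prepared,
                                  pgoff_t index, int order)
  {
          unsigned long end = index + (1UL << order);

          /* the range is prepared iff no bit in [index, end) is clear */
          return find_next_zero_bit(prepared, end, index) >= end;
  }
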
> In preparation for that, do not call folio_test_uptodate() and
> folio_mark_uptodate() explicitly. Instead, return the prepared state of
> the page directly from __kvm_gmem_get_pfn(), with the understanding that
> it applies to 2^N pages with N = *max_order. The function that marks a
> range as prepared takes just a folio for now, but is expected to also
> take an index and order (or something along those lines) once large
> pages are introduced.
>
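So the consumer side ends up reading roughly like this (pseudo-code,
simplified: the exact argument lists in the patch differ, error
handling and locking are omitted, and kvm_gmem_prepare_folio() here
just stands for whatever does the actual preparation):

  folio = __kvm_gmem_get_pfn(file, slot, gfn, &pfn, &is_prepared,
                             &max_order);
  /* is_prepared is taken to cover all 2^N pages, N = max_order */
  if (!is_prepared)
          kvm_gmem_prepare_folio(kvm, slot, gfn, folio);
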
> Thanks to Michael Roth for pointing out the issue with large pages.
>
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
Reviewed-by: Michael Roth <michael.roth@....com>