Message-ID: <ZzVVhuqZqKxNXcuT@google.com>
Date: Wed, 13 Nov 2024 17:42:30 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org, michael.roth@....com
Subject: Re: [PATCH 2/3] KVM: gmem: add a complete set of functions to query
page preparedness
On Fri, Nov 08, 2024, Paolo Bonzini wrote:
> In preparation for moving preparedness out of the folio flags, pass
> the struct file* or struct inode* down to kvm_gmem_mark_prepared,
> as well as the offset within the gmem file. Introduce new functions
> to unprepare pages on punch-hole, and to query the state.
...
> +static bool kvm_gmem_is_prepared(struct file *file, pgoff_t index, struct folio *folio)
> +{
> +	return folio_test_uptodate(folio);
> +}
> +
>  /*
>   * Process @folio, which contains @gfn, so that the guest can use it.
>   * The folio must be locked and the gfn must be contained in @slot.
>   * On successful return the guest sees a zero page so as to avoid
>   * leaking host data and the up-to-date flag is set.
>   */
> -static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
> +static int kvm_gmem_prepare_folio(struct kvm *kvm, struct file *file,
> +				  struct kvm_memory_slot *slot,
>  				  gfn_t gfn, struct folio *folio)
>  {
>  	unsigned long nr_pages, i;
> @@ -147,7 +157,7 @@ static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
>  	index = ALIGN_DOWN(index, 1 << folio_order(folio));
>  	r = __kvm_gmem_prepare_folio(kvm, slot, index, folio);
>  	if (!r)
> -		kvm_gmem_mark_prepared(folio);
> +		kvm_gmem_mark_prepared(file, index, folio);
> 
>  	return r;
>  }
> @@ -231,6 +241,7 @@ static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
>  		kvm_gmem_invalidate_begin(gmem, start, end);
> 
>  	truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1);
> +	kvm_gmem_mark_range_unprepared(inode, start, end - start);
> 
>  	list_for_each_entry(gmem, gmem_list, entry)
>  		kvm_gmem_invalidate_end(gmem, start, end);
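
(For context, not part of the patch: start and end here are page indices,
derived earlier in kvm_gmem_punch_hole() from the byte offset/len along the
lines of the below, so the new helper takes a starting index and a page count.)

	pgoff_t start = offset >> PAGE_SHIFT;
	pgoff_t end = (offset + len) >> PAGE_SHIFT;
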
> @@ -682,7 +693,7 @@ __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
>  	if (max_order)
>  		*max_order = 0;
> 
> -	*is_prepared = folio_test_uptodate(folio);
> +	*is_prepared = kvm_gmem_is_prepared(file, index, folio);
>  	return folio;
>  }
> 
> @@ -704,7 +715,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>  	}
> 
>  	if (!is_prepared)
> -		r = kvm_gmem_prepare_folio(kvm, slot, gfn, folio);
> +		r = kvm_gmem_prepare_folio(kvm, file, slot, gfn, folio);

This is broken when the next patch comes along. If KVM encounters a partially
prepared folio, i.e. a folio with some prepared pages and some unprepared pages,
then KVM needs to zero only the unprepared pages. But kvm_gmem_prepare_folio()
zeroes everything.

static int kvm_gmem_prepare_folio(struct kvm *kvm, struct file *file,
				  struct kvm_memory_slot *slot,
				  gfn_t gfn, struct folio *folio)
{
	unsigned long nr_pages, i;
	pgoff_t index;
	int r;

	nr_pages = folio_nr_pages(folio);
	for (i = 0; i < nr_pages; i++)
		clear_highpage(folio_page(folio, i));
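
One possible shape for the fix, as a completely untested sketch: it assumes
kvm_gmem_is_prepared() can report per-page state via the file and index,
which isn't the case in this patch, where it's still just the folio-wide
up-to-date flag:

	index = gfn - slot->base_gfn + slot->gmem.pgoff;
	index = ALIGN_DOWN(index, 1 << folio_order(folio));

	nr_pages = folio_nr_pages(folio);
	for (i = 0; i < nr_pages; i++) {
		/* Don't zero already-prepared pages, they may hold guest data. */
		if (!kvm_gmem_is_prepared(file, index + i, folio))
			clear_highpage(folio_page(folio, i));
	}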