Message-ID: <ZQypbSuMrbJpJBER@google.com>
Date: Thu, 21 Sep 2023 13:37:01 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: isaku.yamahata@...el.com
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
isaku.yamahata@...il.com, Michael Roth <michael.roth@....com>,
Paolo Bonzini <pbonzini@...hat.com>, erdemaktas@...gle.com,
Sagi Shahar <sagis@...gle.com>,
David Matlack <dmatlack@...gle.com>,
Kai Huang <kai.huang@...el.com>,
Zhi Wang <zhi.wang.linux@...il.com>, chen.bo@...el.com,
linux-coco@...ts.linux.dev,
Chao Peng <chao.p.peng@...ux.intel.com>,
Ackerley Tng <ackerleytng@...gle.com>,
Vishal Annapurve <vannapurve@...gle.com>,
Yuan Yao <yuan.yao@...ux.intel.com>,
Jarkko Sakkinen <jarkko@...nel.org>,
Xu Yilun <yilun.xu@...el.com>,
Quentin Perret <qperret@...gle.com>, wei.w.wang@...el.com,
Fuad Tabba <tabba@...gle.com>
Subject: Re: [RFC PATCH v2 1/6] KVM: gmem: Truncate pages on punch hole
On Thu, Sep 21, 2023, isaku.yamahata@...el.com wrote:
> From: Isaku Yamahata <isaku.yamahata@...el.com>
>
> Although kvm_gmem_punch_hole() keeps all pages in the mapping when punching
> a hole, the common expectation is that the pages are truncated. Truncate
> pages when punching a hole. Because page contents can be encrypted, avoid
> zeroing a partial folio by refusing partial punch-hole requests.
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> ---
> virt/kvm/guest_mem.c | 14 ++++++++++++--
> 1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_mem.c
> index a819367434e9..01fb4ca861d0 100644
> --- a/virt/kvm/guest_mem.c
> +++ b/virt/kvm/guest_mem.c
> @@ -130,22 +130,32 @@ static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
>  static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
>  {
>  	struct list_head *gmem_list = &inode->i_mapping->private_list;
> +	struct address_space *mapping = inode->i_mapping;
>  	pgoff_t start = offset >> PAGE_SHIFT;
>  	pgoff_t end = (offset + len) >> PAGE_SHIFT;
>  	struct kvm_gmem *gmem;
>  
> +	/*
> +	 * Punching a hole may result in zeroing a partial area. As pages
> +	 * can be encrypted, prohibit zeroing partial areas.
> +	 */
> +	if (offset & ~PAGE_MASK || len & ~PAGE_MASK)
> +		return -EINVAL;
This should be unnecessary, kvm_gmem_fallocate() does
	if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
		return -EINVAL;
before invoking kvm_gmem_punch_hole(). If that's not working, i.e. if your test
fails, then that code needs to be fixed. I'll run your test to double-check,
but AFAICT this check is unnecessary.
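For reference, the fallocate() path in the current series looks roughly like
this (paraphrased sketch, details may differ from the exact code on the list):

	static long kvm_gmem_fallocate(struct file *file, int mode,
				       loff_t offset, loff_t len)
	{
		int ret;

		if (!(mode & FALLOC_FL_KEEP_SIZE))
			return -EOPNOTSUPP;

		if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
			return -EOPNOTSUPP;

		/* Unaligned ranges are rejected before punching the hole. */
		if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
			return -EINVAL;

		if (mode & FALLOC_FL_PUNCH_HOLE)
			ret = kvm_gmem_punch_hole(file->f_inode, offset, len);
		else
			ret = kvm_gmem_allocate(file->f_inode, offset, len);

		if (!ret)
			file_accessed(file);
		return ret;
	}

i.e. kvm_gmem_punch_hole() should never see an unaligned offset or len.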
> +
>  	/*
>  	 * Bindings must be stable across invalidation to ensure the start+end
>  	 * are balanced.
>  	 */
> -	filemap_invalidate_lock(inode->i_mapping);
> +	filemap_invalidate_lock(mapping);
>  
>  	list_for_each_entry(gmem, gmem_list, entry) {
>  		kvm_gmem_invalidate_begin(gmem, start, end);
>  		kvm_gmem_invalidate_end(gmem, start, end);
>  	}
>  
> -	filemap_invalidate_unlock(inode->i_mapping);
> +	truncate_inode_pages_range(mapping, offset, offset + len - 1);
The truncate needs to happen between begin() and end(), otherwise KVM can create
mappings to the memory between end() and truncate().
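Something like this (untested sketch, using the names from the quoted code), so
that the truncation is sandwiched between the begin() and end() calls:

	filemap_invalidate_lock(mapping);

	list_for_each_entry(gmem, gmem_list, entry)
		kvm_gmem_invalidate_begin(gmem, start, end);

	/*
	 * Truncate while the invalidation is in-progress so that KVM can't
	 * re-establish mappings to pages that are about to be freed.
	 */
	truncate_inode_pages_range(mapping, offset, offset + len - 1);

	list_for_each_entry(gmem, gmem_list, entry)
		kvm_gmem_invalidate_end(gmem, start, end);

	filemap_invalidate_unlock(mapping);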
> +
> +	filemap_invalidate_unlock(mapping);
>  
>  	return 0;
>  }
> --
> 2.25.1
>