Message-ID: <5210e6e6e2eb73b04cb7039084015612479ae2fe.camel@intel.com>
Date: Sun, 21 Apr 2024 01:58:07 +0000
From: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
To: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "Yamahata,
Isaku" <isaku.yamahata@...el.com>
CC: "Zhang, Tina" <tina.zhang@...el.com>, "seanjc@...gle.com"
<seanjc@...gle.com>, "Yuan, Hang" <hang.yuan@...el.com>, "Huang, Kai"
<kai.huang@...el.com>, "Chen, Bo2" <chen.bo@...el.com>, "sagis@...gle.com"
<sagis@...gle.com>, "isaku.yamahata@...il.com" <isaku.yamahata@...il.com>,
"Aktas, Erdem" <erdemaktas@...gle.com>, "pbonzini@...hat.com"
<pbonzini@...hat.com>, "sean.j.christopherson@...el.com"
<sean.j.christopherson@...el.com>
Subject: Re: [PATCH v19 059/130] KVM: x86/tdp_mmu: Don't zap private pages for
unsupported cases
On Mon, 2024-02-26 at 00:26 -0800, isaku.yamahata@...el.com wrote:
> +/* Used by mmu notifier via kvm_unmap_gfn_range() */
>  bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
>  				 bool flush)
>  {
>  	struct kvm_mmu_page *root;
> +	bool zap_private = false;
> +
> +	if (kvm_gfn_shared_mask(kvm)) {
> +		if (!range->only_private && !range->only_shared)
> +			/* attributes change */
> +			zap_private = !(range->arg.attributes &
> +					KVM_MEMORY_ATTRIBUTE_PRIVATE);
> +		else
> +			zap_private = range->only_private;
> +	}
>
>  	__for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false)
>  		flush = tdp_mmu_zap_leafs(kvm, root, range->start, range->end,
> -					  range->may_block, flush);
> +					  range->may_block, flush,
> +					  zap_private && is_private_sp(root));
>
I'm trying to apply the feedback:
- Drop MTRR support
- Change only_private/shared to exclude_private/shared (see the struct sketch
  below)
...and update the log accordingly. These changes all intersect in this
function, and I'm having a hard time justifying the resulting logic.
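For reference, with that rename struct kvm_gfn_range would look roughly like
this (just a sketch to show the names I'm using below, exact layout aside):

struct kvm_gfn_range {
	struct kvm_memory_slot *slot;
	gfn_t start;
	gfn_t end;
	union kvm_mmu_notifier_arg arg;
	bool exclude_private;	/* skip zapping the private alias */
	bool exclude_shared;	/* skip zapping the shared alias */
	bool may_block;
};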
It seems the point of passing the attributes down is this:
"However, looking at kvm_mem_attrs_changed() again, I think invoking
kvm_unmap_gfn_range() from generic KVM code is a mistake and shortsighted.
Zapping in response to *any* attribute change is very private/shared centric.
E.g. if/when we extend attributes to provide per-page RWX protections, zapping
existing SPTEs in response to granting *more* permissions may not be necessary
or even desirable."
https://lore.kernel.org/all/ZJX0hk+KpQP0KUyB@google.com/
But I think shoving the logic for how to handle attribute changes deep into
the zapping code is the opposite extreme. It results in confusing logic, with
the decision on what to zap spread all around.
Instead we should have kvm_arch_pre_set_memory_attributes() adjust the range so
it can tell kvm_unmap_gfn_range() which ranges to zap (private/shared).
So:
kvm_vm_set_mem_attributes() - passes the attributes
kvm_arch_pre_set_memory_attributes() - chooses which private/shared alias to
                                       zap based on the attribute
kvm_unmap_gfn_range()/kvm_tdp_mmu_unmap_gfn_range() - zaps the chosen alias
tdp_mmu_zap_leafs() - doesn't care about the root type, just zaps leafs
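On the x86 side that could be something along these lines (just a sketch to
show the shape, not the final code):

bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
					struct kvm_gfn_range *range)
{
	if (kvm_gfn_shared_mask(kvm)) {
		/*
		 * Converting to private: only the shared alias needs to be
		 * zapped. Converting to shared: only the private alias does.
		 */
		if (range->arg.attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE)
			range->exclude_private = true;
		else
			range->exclude_shared = true;
	}

	return kvm_unmap_gfn_range(kvm, range);
}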
This zapping function can then just do the simple thing it's told to do. It ends
up looking like:
bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
				 bool flush)
{
	struct kvm_mmu_page *root;
	bool exclude_private = false;
	bool exclude_shared = false;

	if (kvm_gfn_shared_mask(kvm)) {
		exclude_private = range->exclude_private;
		exclude_shared = range->exclude_shared;
	}

	__for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false) {
		if (exclude_private && is_private_sp(root))
			continue;
		if (exclude_shared && !is_private_sp(root))
			continue;

		flush = tdp_mmu_zap_leafs(kvm, root, range->start, range->end,
					  range->may_block, flush);
	}

	return flush;
}
The resulting logic should be the same. Separately, we might be able to simplify
it further if we change the behavior a bit (lose the kvm_gfn_shared_mask() check
or the exclude_shared member), but in the meantime this seems a lot easier to
explain and review for what I think is equivalent behavior.
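For example, losing the kvm_gfn_shared_mask() check (a behavior change, so
just a sketch for illustration) would collapse it to:

bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
				 bool flush)
{
	struct kvm_mmu_page *root;

	/*
	 * Relies on exclude_private/exclude_shared never being set for VMs
	 * that don't have a shared GFN mask.
	 */
	__for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false) {
		if (range->exclude_private && is_private_sp(root))
			continue;
		if (range->exclude_shared && !is_private_sp(root))
			continue;

		flush = tdp_mmu_zap_leafs(kvm, root, range->start, range->end,
					  range->may_block, flush);
	}

	return flush;
}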