Message-ID: <20230920192729.694309-2-surajjs@amazon.com>
Date: Wed, 20 Sep 2023 12:27:29 -0700
From: Suraj Jitindar Singh <surajjs@...zon.com>
To: <stable@...r.kernel.org>
CC: <james.morse@....com>, <alexandru.elisei@....com>,
<suzuki.poulose@....com>, <oliver.upton@...ux.dev>,
<catalin.marinas@....com>, <linux-arm-kernel@...ts.infradead.org>,
<kvmarm@...ts.linux.dev>, <kvmarm@...ts.cs.columbia.edu>,
<linux-kernel@...r.kernel.org>, <sjitindarsingh@...il.com>,
Will Deacon <will@...nel.org>,
Quentin Perret <qperret@...gle.com>,
Marc Zyngier <maz@...nel.org>,
Suraj Jitindar Singh <surajjs@...zon.com>
Subject: [PATCH stable 6.1.y 2/2] KVM: arm64: Prevent unconditional donation of unmapped regions from the host
From: Will Deacon <will@...nel.org>
commit 09cce60bddd6461a93a5bf434265a47827d1bc6f upstream.
Since host stage-2 mappings are created lazily, we cannot rely solely on
the pte in order to recover the target physical address when checking a
host-initiated memory transition as this permits donation of unmapped
regions corresponding to MMIO or "no-map" memory.
Instead of inspecting the pte, move the addr_is_allowed_memory() check
into the host callback function where it is passed the physical address
directly from the walker.
Cc: Quentin Perret <qperret@...gle.com>
Fixes: e82edcc75c4e ("KVM: arm64: Implement do_share() helper for sharing memory")
Signed-off-by: Will Deacon <will@...nel.org>
Signed-off-by: Marc Zyngier <maz@...nel.org>
Link: https://lore.kernel.org/r/20230518095844.1178-1-will@kernel.org
[ bp: s/ctx->addr/addr in __check_page_state_visitor due to missing commit
"KVM: arm64: Combine visitor arguments into a context structure"
in stable.
]
Signed-off-by: Suraj Jitindar Singh <surajjs@...zon.com>
---
arch/arm64/kvm/hyp/nvhe/mem_protect.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 0f6c053686c7..0faa330a41ed 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -424,7 +424,7 @@ struct pkvm_mem_share {
struct check_walk_data {
enum pkvm_page_state desired;
- enum pkvm_page_state (*get_page_state)(kvm_pte_t pte);
+ enum pkvm_page_state (*get_page_state)(kvm_pte_t pte, u64 addr);
};
static int __check_page_state_visitor(u64 addr, u64 end, u32 level,
@@ -435,10 +435,7 @@ static int __check_page_state_visitor(u64 addr, u64 end, u32 level,
struct check_walk_data *d = arg;
kvm_pte_t pte = *ptep;
- if (kvm_pte_valid(pte) && !addr_is_allowed_memory(kvm_pte_to_phys(pte)))
- return -EINVAL;
-
- return d->get_page_state(pte) == d->desired ? 0 : -EPERM;
+ return d->get_page_state(pte, addr) == d->desired ? 0 : -EPERM;
}
static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
@@ -453,8 +450,11 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
return kvm_pgtable_walk(pgt, addr, size, &walker);
}
-static enum pkvm_page_state host_get_page_state(kvm_pte_t pte)
+static enum pkvm_page_state host_get_page_state(kvm_pte_t pte, u64 addr)
{
+ if (!addr_is_allowed_memory(addr))
+ return PKVM_NOPAGE;
+
if (!kvm_pte_valid(pte) && pte)
return PKVM_NOPAGE;
@@ -521,7 +521,7 @@ static int host_initiate_unshare(u64 *completer_addr,
return __host_set_page_state_range(addr, size, PKVM_PAGE_OWNED);
}
-static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte)
+static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte, u64 addr)
{
if (!kvm_pte_valid(pte))
return PKVM_NOPAGE;
--
2.34.1