Message-ID: <YEtUVriUSi/MFGit@google.com>
Date: Fri, 12 Mar 2021 11:45:26 +0000
From: Quentin Perret <qperret@...gle.com>
To: Will Deacon <will@...nel.org>
Cc: catalin.marinas@....com, maz@...nel.org, james.morse@....com,
julien.thierry.kdev@...il.com, suzuki.poulose@....com,
android-kvm@...gle.com, linux-kernel@...r.kernel.org,
kernel-team@...roid.com, kvmarm@...ts.cs.columbia.edu,
linux-arm-kernel@...ts.infradead.org, tabba@...gle.com,
mark.rutland@....com, dbrazdil@...gle.com, mate.toth-pal@....com,
seanjc@...gle.com, robh+dt@...nel.org, ardb@...nel.org
Subject: Re: [PATCH v4 28/34] KVM: arm64: Use page-table to track page
ownership
On Friday 12 Mar 2021 at 11:18:05 (+0000), Will Deacon wrote:
> On Fri, Mar 12, 2021 at 10:13:26AM +0000, Quentin Perret wrote:
> > On Friday 12 Mar 2021 at 09:32:06 (+0000), Will Deacon wrote:
> > > I'm not saying to use the VMID directly, just that allocating half of the
> > > pte feels a bit OTT given that the state of things after this patch series
> > > is that we're using exactly 1 bit.
> >
> > Right, and that was the reason for the PROT_NONE approach in the
> > previous version, but we agreed it'd be worth generalizing to allow for
> > future use-cases :-)
>
> Yeah, just generalising to 32 bits feels like going too far! I dunno,
> make it a u8 for now, or define the hypervisor owner ID as 1 and reject
> owners greater than that? We can easily extend it later.
Alrighty, I'll do _both_.
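Something like the below, maybe (rough sketch only; the constant and
helper names here are mine, not from the series):

#define KVM_HYP_OWNER_ID	1	/* as you suggested: hyp == 1 */
#define KVM_MAX_OWNER_ID	KVM_HYP_OWNER_ID

/* u8 is plenty while the hyp is the only non-default owner. */
static bool stage2_owner_id_valid(u8 owner_id)
{
	/* Reject owner IDs we haven't defined yet; easy to extend later. */
	return owner_id <= KVM_MAX_OWNER_ID;
}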
> > > > > > @@ -517,28 +543,36 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
> > > > > > if (!kvm_block_mapping_supported(addr, end, phys, level))
> > > > > > return -E2BIG;
> > > > > >
> > > > > > - new = kvm_init_valid_leaf_pte(phys, data->attr, level);
> > > > > > - if (kvm_pte_valid(old)) {
> > > > > > + if (kvm_pte_valid(data->attr))
> > > > >
> > > > > This feels like a bit of a hack to me: the 'attr' field in stage2_map_data
> > > > > is intended to correspond directly to the lower/upper attributes of the
> > > > > descriptor as per the architecture, so tagging the valid bit in there is
> > > > > pretty grotty. However, I can see the significant advantage in being able
> > > > > to re-use the stage2_map_walker functionality, so how about instead of nobbling
> > > > > attr, you set phys to something invalid instead, e.g.:
> > > > >
> > > > > #define KVM_PHYS_SET_OWNER (-1ULL)
> > > >
> > > > That'll confuse kvm_block_mapping_supported() and friends I think, at
> > > > least in their current form. If you _really_ don't like this, maybe we
> > > > could have an extra 'flags' field in stage2_map_data?
> > >
> > > I was pondering this last night and I thought of two ways to do it:
> > >
> > > 1. Add a 'bool valid' and then stick the owner and the phys in a union.
> > > (yes, you'll need to update the block mapping checks to look at the
> > > valid flag)
> >
> > Right, though that is also used for the hyp s1, which doesn't use any of
> > this ATM. That shouldn't be too bad to change; I'll have a look.
>
> Oh, I meant stick the bool in the stage2_map_data so that should be limited
> to the stage2 path.
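Ah, OK. So, to check I'm reading you right, something like the below in
stage2_map_data (sketch only, field names guessed):

struct stage2_map_data {
	bool			valid;
	union {
		u64		phys;		/* valid == true */
		u8		owner_id;	/* valid == false */
	};
	u64			attr;
	/* mmu, memcache, mm_ops, ... unchanged */
};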
I mean, I still want to use kvm_block_mapping_supported(), but skip the
phys alignment check when phys isn't valid. I find it ugly to add a
'valid' parameter to the function itself, so maybe we're better off
just special-casing phys == -1ULL as you first suggested. How much do
you hate the below (totally untested)?
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 8e4599256969..9ec937462fd6 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -71,6 +71,13 @@ static u64 kvm_granule_size(u32 level)
 	return BIT(kvm_granule_shift(level));
 }
 
+#define KVM_PHYS_INVALID (-1ULL)
+
+static bool kvm_phys_is_valid(u64 phys)
+{
+	return phys != KVM_PHYS_INVALID;
+}
+
 static bool kvm_level_support_block_mappings(u32 level)
 {
 	/*
@@ -90,7 +97,10 @@ static bool kvm_block_mapping_supported(u64 addr, u64 end, u64 phys, u32 level)
 	if (granule > (end - addr))
 		return false;
 
-	return IS_ALIGNED(addr, granule) && IS_ALIGNED(phys, granule);
+	if (kvm_phys_is_valid(phys) && !IS_ALIGNED(phys, granule))
+		return false;
+
+	return IS_ALIGNED(addr, granule);
 }
 
 static u32 kvm_pgtable_idx(struct kvm_pgtable_walk_data *data, u32 level)
@@ -550,7 +560,7 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 	if (!kvm_block_mapping_supported(addr, end, phys, level))
 		return -E2BIG;
 
-	if (kvm_pte_valid(data->attr))
+	if (kvm_phys_is_valid(phys))
 		new = kvm_init_valid_leaf_pte(phys, data->attr, level);
 	else
 		new = kvm_init_invalid_leaf_owner(data->owner_id);
@@ -580,7 +590,8 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 	smp_store_release(ptep, new);
 	if (stage2_pte_is_counted(new))
 		mm_ops->get_page(ptep);
-	data->phys += granule;
+	if (kvm_phys_is_valid(phys))
+		data->phys += granule;
 
 	return 0;
 }
@@ -739,9 +750,6 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	if (ret)
 		return ret;
 
-	/* Set the valid flag to distinguish with the set_owner() path. */
-	map_data.attr |= KVM_PTE_VALID;
-
 	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
 	dsb(ishst);
 	return ret;
@@ -752,6 +760,7 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
 {
 	int ret;
 	struct stage2_map_data map_data = {
+		.phys		= KVM_PHYS_INVALID,
 		.mmu		= pgt->mmu,
 		.memcache	= mc,
 		.mm_ops		= pgt->mm_ops,
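For completeness, from the caller's side the two paths would then look
like so (sketch; the tail of the set_owner() signature is assumed here,
not verbatim from the series):

static int example_map_then_annotate(struct kvm_pgtable *pgt, u64 addr,
				     u64 size, u64 phys,
				     enum kvm_pgtable_prot prot, void *mc,
				     u8 owner_id)
{
	int ret;

	/* Regular mapping: a real PA goes in, valid leaf PTEs come out. */
	ret = kvm_pgtable_stage2_map(pgt, addr, size, phys, prot, mc);
	if (ret)
		return ret;

	/*
	 * Ownership annotation: map_data.phys is pre-initialised to
	 * KVM_PHYS_INVALID, so the walker skips the phys alignment check
	 * and installs invalid PTEs tagged with owner_id instead.
	 */
	return kvm_pgtable_stage2_set_owner(pgt, addr, size, mc, owner_id);
}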