Message-ID: <Z1ljbPFsUdUC8AD3@google.com>
Date: Wed, 11 Dec 2024 10:03:24 +0000
From: Quentin Perret <qperret@...gle.com>
To: Fuad Tabba <tabba@...gle.com>
Cc: Marc Zyngier <maz@...nel.org>, Oliver Upton <oliver.upton@...ux.dev>,
	Joey Gouly <joey.gouly@....com>,
	Suzuki K Poulose <suzuki.poulose@....com>,
	Zenghui Yu <yuzenghui@...wei.com>,
	Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will@...nel.org>,
	Vincent Donnefort <vdonnefort@...gle.com>,
	Sebastian Ene <sebastianene@...gle.com>,
	linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 16/18] KVM: arm64: Introduce __pkvm_tlb_flush_vmid()

On Tuesday 10 Dec 2024 at 15:23:02 (+0000), Fuad Tabba wrote:
> Hi Quentin,
> 
> On Tue, 3 Dec 2024 at 10:38, Quentin Perret <qperret@...gle.com> wrote:
> >
> > Introduce a new hypercall to flush the TLBs of non-protected guests. The
> > host kernel will be responsible for issuing this hypercall after changing
> > stage-2 permissions using the __pkvm_host_relax_guest_perms() or
> > __pkvm_host_wrprotect_guest() paths. This is left under the host's
> > responsibility for performance reasons.
> >
> > Note however that the TLB maintenance for all *unmap* operations still
> > remains entirely under the hypervisor's responsibility for security
> > reasons -- an unmapped page may be donated to another entity, so a stale
> > TLB entry could be used to leak private data.
> >
> > Signed-off-by: Quentin Perret <qperret@...gle.com>
> > ---
> >  arch/arm64/include/asm/kvm_asm.h   |  1 +
> >  arch/arm64/kvm/hyp/nvhe/hyp-main.c | 17 +++++++++++++++++
> >  2 files changed, 18 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> > index 6178e12a0dbc..df6237d0459c 100644
> > --- a/arch/arm64/include/asm/kvm_asm.h
> > +++ b/arch/arm64/include/asm/kvm_asm.h
> > @@ -87,6 +87,7 @@ enum __kvm_host_smccc_func {
> >         __KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
> >         __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
> >         __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
> > +       __KVM_HOST_SMCCC_FUNC___pkvm_tlb_flush_vmid,
> >  };
> >
> >  #define DECLARE_KVM_VHE_SYM(sym)       extern char sym[]
> > diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> > index de0012a75827..219d7fb850ec 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> > @@ -398,6 +398,22 @@ static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
> >         __kvm_tlb_flush_vmid(kern_hyp_va(mmu));
> >  }
> >
> > +static void handle___pkvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
> > +{
> > +       DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
> > +       struct pkvm_hyp_vm *hyp_vm;
> > +
> > +       if (!is_protected_kvm_enabled())
> > +               return;
> > +
> > +       hyp_vm = get_pkvm_hyp_vm(handle);
> > +       if (!hyp_vm)
> > +               return;
> > +
> > +       __kvm_tlb_flush_vmid(&hyp_vm->kvm.arch.mmu);
> > +       put_pkvm_hyp_vm(hyp_vm);
> > +}
> 
> Since this is practically the same as handle___kvm_tlb_flush_vmid(),
> does it make sense to modify that handler instead, so that it does the
> right thing depending on whether pKVM is enabled? I'm also thinking
> ahead, in case we later want to support the rest of the
> kvm_tlb_flush_vmid_*() variants.

I considered it, but the two implementations want different arguments --
pKVM wants the VM handle, while standard KVM passes the kvm_s2_mmu
pointer directly. I had an implementation at some point that multiplexed
the two on a single HVC (interpreting the argument differently depending
on whether pKVM is enabled), but that felt more error-prone than simply
having two HVCs.
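
For illustration, the multiplexed handler would end up looking roughly
like the sketch below (just a sketch, not the code I actually had) --
note how the same argument register gets re-interpreted depending on
the mode, which is the part that felt fragile:

static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
{
	if (is_protected_kvm_enabled()) {
		/* Argument 1 is re-interpreted as a pKVM VM handle. */
		DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
		struct pkvm_hyp_vm *hyp_vm;

		hyp_vm = get_pkvm_hyp_vm(handle);
		if (!hyp_vm)
			return;

		__kvm_tlb_flush_vmid(&hyp_vm->kvm.arch.mmu);
		put_pkvm_hyp_vm(hyp_vm);
	} else {
		/* Argument 1 is a host pointer to the stage-2 MMU. */
		DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);

		__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
	}
}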

Happy to reconsider if we can find a good way to make it work though.
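
To make the host-side responsibility from the commit message concrete,
the call site would be something along these lines (a sketch only --
the wrapper name is invented for illustration, only the hypercall and
the handle lookup are from the actual code):

/*
 * After relaxing or write-protecting guest stage-2 permissions, the
 * host flushes that guest's TLBs itself via the new hypercall.
 */
static void host_flush_guest_tlbs(struct kvm *kvm)
{
	kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
}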
