Message-ID: <20200922183452.xalwog2ojsc5ogpe@google.com>
Date:   Tue, 22 Sep 2020 19:34:52 +0100
From:   David Brazdil <dbrazdil@...gle.com>
To:     Will Deacon <will@...nel.org>
Cc:     kvmarm@...ts.cs.columbia.edu,
        Catalin Marinas <catalin.marinas@....com>,
        Marc Zyngier <maz@...nel.org>,
        James Morse <james.morse@....com>,
        Julien Thierry <julien.thierry.kdev@...il.com>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        Dennis Zhou <dennis@...nel.org>, Tejun Heo <tj@...nel.org>,
        Christoph Lameter <cl@...ux.com>,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        kernel-team@...roid.com
Subject: Re: [PATCH v3 10/11] kvm: arm64: Set up hyp percpu data for nVHE

> > -#define this_cpu_ptr_nvhe(sym)		this_cpu_ptr(&kvm_nvhe_sym(sym))
> > -#define per_cpu_ptr_nvhe(sym, cpu)	per_cpu_ptr(&kvm_nvhe_sym(sym), cpu)
> > +/* Array of percpu base addresses. Length of the array is nr_cpu_ids. */
> > +extern unsigned long *kvm_arm_hyp_percpu_base;
> > +
> > +/*
> > + * Compute pointer to a symbol defined in nVHE percpu region.
> > + * Returns NULL if percpu memory has not been allocated yet.
> > + */
> > +#define this_cpu_ptr_nvhe(sym)	per_cpu_ptr_nvhe(sym, smp_processor_id())
> 
> Don't you run into similar problems here with the pcpu accessors when
> CONFIG_DEBUG_PREEMPT=y? I'm worried we can end up with an unresolved
> reference to debug_smp_processor_id().

Fortunately not. This no longer uses the generic macros at all.
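
For reference, the pointer is computed by hand from the kvm_arm_hyp_percpu_base
array and the symbol's offset within the nVHE percpu section, instead of going
through per_cpu_ptr(). Roughly this shape (an illustration of the idea only,
not the exact code in the series):

	#define per_cpu_ptr_nvhe(sym, cpu)					\
	({									\
		unsigned long base, off;					\
		/* Base of this CPU's hyp percpu region, 0 until allocated. */	\
		base = kvm_arm_hyp_percpu_base[cpu];				\
		/* Offset of the symbol within the nVHE percpu section. */	\
		off = (unsigned long)&kvm_nvhe_sym(sym) -			\
		      (unsigned long)&kvm_nvhe_sym(__per_cpu_start);		\
		base ? (typeof(kvm_nvhe_sym(sym)) *)(base + off) : NULL;	\
	})

No generic percpu accessor is involved, so nothing in the hyp object can end up
referencing debug_smp_processor_id().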

> >  /* The VMID used in the VTTBR */
> >  static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
> > @@ -1258,6 +1259,15 @@ long kvm_arch_vm_ioctl(struct file *filp,
> >  	}
> >  }
> >  
> > +#define kvm_hyp_percpu_base(cpu)	((unsigned long)per_cpu_ptr_nvhe(__per_cpu_start, cpu))
> 
> Having both kvm_arm_hyp_percpu_base and kvm_hyp_percpu_base be so
> completely different is a recipe for disaster! Please can you rename
> one/both of them to make it clearer what they represent?

I am heavily simplifying this code in v4 and got rid of this macro completely,
so hopefully there will be no confusion.

> > -	if (!kvm_pmu_switch_needed(attr))
> > +	if (!ctx || !kvm_pmu_switch_needed(attr))
> >  		return;
> >  
> >  	if (!attr->exclude_host)
> > @@ -49,6 +49,9 @@ void kvm_clr_pmu_events(u32 clr)
> >  {
> >  	struct kvm_host_data *ctx = this_cpu_ptr_hyp(kvm_host_data);
> >  
> > +	if (!ctx)
> > +		return;
> 
> I guess this should only happen if KVM failed to initialise or something,
> right? (e.g. if we were booted at EL1). If so, I'm wondering whether it
> would be better to do something like:
> 
> 	if (!is_hyp_mode_available())
> 		return;
> 
> 	WARN_ON_ONCE(!ctx);
> 
> so that any unexpected NULL pointer there screams loudly, rather than causes
> the register switch to be silently ignored. What do you think?

Unfortunately, this happens on every boot. I don't fully understand how the
boot order is determined, so please correct me if this makes no sense, but
kvm_clr_pmu_events is called as part of CPUHP_AP_PERF_ARM_STARTING. The first
time that happens is before KVM has initialized (tested by inserting
BUG_ON(!ctx)). Today that's not a problem: the per-CPU memory is there and it's
all zeroes. It becomes a problem with this patch because the per-CPU memory is
not there *yet*.
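
To make the intent of the guard explicit, this is the shape the function ends
up with (a sketch only; the NULL check is the hunk above, and I'm assuming the
body still just clears the host/guest event masks as in current pmu.c):

	void kvm_clr_pmu_events(u32 clr)
	{
		struct kvm_host_data *ctx = this_cpu_ptr_hyp(kvm_host_data);

		/*
		 * During early CPU hotplug (CPUHP_AP_PERF_ARM_STARTING) the
		 * hyp per-CPU memory has not been allocated yet, so the
		 * accessor returns NULL and there is nothing to clear.
		 */
		if (!ctx)
			return;

		ctx->pmu_events.events_host &= ~clr;
		ctx->pmu_events.events_guest &= ~clr;
	}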
