Message-ID: <c62a1fd5-54e6-4727-d457-1b2d19b0af35@redhat.com>
Date: Thu, 30 May 2024 14:56:53 +0200 (CEST)
From: Sebastian Ott <sebott@...hat.com>
To: Eric Auger <eauger@...hat.com>
cc: linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev, 
    linux-kernel@...r.kernel.org, Marc Zyngier <maz@...nel.org>, 
    Oliver Upton <oliver.upton@...ux.dev>, James Morse <james.morse@....com>, 
    Suzuki K Poulose <suzuki.poulose@....com>, 
    Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>
Subject: Re: [PATCH v3 3/6] KVM: arm64: add emulation for CTR_EL0 register

Hej Eric,

On Wed, 29 May 2024, Eric Auger wrote:
> On 5/14/24 09:22, Sebastian Ott wrote:
>> +static int validate_clidr_el1(u64 clidr_el1, u64 ctr_el0)
>> +{
>> +	u64 idc = !CLIDR_LOC(clidr_el1) ||
>> +		  (!CLIDR_LOUIS(clidr_el1) && !CLIDR_LOUU(clidr_el1));
> This actually computes:
> CLIDR_EL1.LoC == 0b000 or (CLIDR_EL1.LoUIS == 0b000 &&
> CLIDR_EL1.LoUU == 0b000)
>
> referring to the ARM ARM,
> "Terminology for Clean, Invalidate, and Clean and Invalidate instructions":
>
> 1) If the LoC field value is 0x0, this means that no levels of cache
> need to be cleaned or invalidated
> when cleaning or invalidating to the Point of Coherency.
>
> 2) If the LoUU field value is 0x0, this means that no levels of data
> cache need to be cleaned or
> invalidated when cleaning or invalidating to the Point of Unification.
>
> 3) If the LoUIS field value is 0x0, this means that no levels of data or
> unified cache need to be
> cleaned or invalidated when cleaning or invalidating to the Point of
> Unification for the Inner Shareable shareability domain.
>
> so to me, if the above computation is true, this means we have no level of
> cache to take care of, so although CTR_EL0.IDC = 0 would normally mean
> you must "Data cache clean to the Point of Unification", that is not
> needed in that case.
>
> But the spec does not really state that IDC=0 and
> no_level_of_cache_to_clean_inv are incompatible, as far as I can see.

This is just existing code moved to a helper.
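To make the rule explicit: a CLIDR_EL1 whose LoC is 0, or whose LoUU and
LoUIS are both 0, already tells the guest that nothing needs cleaning to the
PoC/PoU - which is what CTR_EL0.IDC = 1 advertises. A sketch of that check
(the helper name is made up, the field macros are the ones from the quoted
hunk):

	static bool clidr_implies_idc(u64 clidr_el1)	/* hypothetical name */
	{
		return !CLIDR_LOC(clidr_el1) ||
		       (!CLIDR_LOUIS(clidr_el1) && !CLIDR_LOUU(clidr_el1));
	}

Claiming IDC = 0 together with such a topology is the inconsistency the
helper rejects.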

>> +	if ((clidr_el1 & CLIDR_EL1_RES0) || (!(ctr_el0 & CTR_EL0_IDC) && idc))
>> +		return -EINVAL;
>
> Isn't (clidr_el1 & CLIDR_EL1_RES0) already checked by
>
>        { SYS_DESC(SYS_CLIDR_EL1), access_clidr, reset_clidr, CLIDR_EL1,
>          .set_user = set_clidr, .val = ~CLIDR_EL1_RES0 },
>

Nope, that would only be the case when arm64_check_features()
is used (by having set_id_reg() for the .set_user callback).
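Roughly (a simplified sketch, not the actual sys_regs.c code; the function
name is made up):

	/* generic path: set_id_reg() runs arm64_check_features(), which
	 * treats rd->val as the writable mask, so a value with RES0 bits
	 * set can never match the register's limit and is rejected */
	static int set_user_generic(struct kvm_vcpu *vcpu,
				    const struct sys_reg_desc *rd, u64 val)
	{
		u64 limit = 0;	/* stand-in for the reset/limit value */

		if ((val & ~rd->val) != (limit & ~rd->val))
			return -EINVAL;
		return 0;
	}

A custom .set_user like set_clidr() never looks at rd->val, so it has to
check the RES0 bits itself, which is what validate_clidr_el1() does.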

>> +static int validate_cache_top(struct kvm_vcpu *vcpu, u64 ctr_el0)
> s/top/topology?

Hm, that name is already quite long.

>> +{
>> +	const struct sys_reg_desc *clidr_el1;
>> +	unsigned int i;
>> +	int ret;
>> +
>> +	clidr_el1 = get_sys_reg_desc(SYS_CLIDR_EL1);
>> +	if (!clidr_el1)
>> +		return -ENOENT;
>> +
>> +	ret = validate_clidr_el1(__vcpu_sys_reg(vcpu, clidr_el1->reg), ctr_el0);
>> +	if (ret)
>> +		return ret;
>> +
>> +	if (!vcpu->arch.ccsidr)
>> +		return 0;
>> +
> worth adding a comment about what this does, as this is not
> straightforward ;-)

Hm, "check for validity of the cache topology" - that's kinda the
functions name, so no added value. "Make sure the cache line size
per level obeys the minimum cache line setting" - would this help?
Can't think of smth else right now, sry. Suggestions?
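Maybe spelling out the units makes it clearer. The field encodings are the
architected ones (CCSIDR_EL1.LineSize = log2(bytes) - 4, CTR_EL0.{I,D}minLine
= log2(words) = log2(bytes) - 2); the helper names below are made up:

	/* hypothetical helpers, only to make the comparison's units explicit */
	static unsigned int ccsidr_line_size_log2(u64 ccsidr)
	{
		/* CCSIDR_EL1.LineSize encodes log2(bytes per line) - 4 */
		return FIELD_GET(CCSIDR_EL1_LineSize, ccsidr) + 4;
	}

	static unsigned int ctr_min_line_size_log2(u64 ctr_el0, bool icache)
	{
		/* CTR_EL0.{I,D}minLine encode log2(words per line); words are 4 bytes */
		u64 field = icache ? FIELD_GET(CTR_EL0_IminLine_MASK, ctr_el0)
				   : FIELD_GET(CTR_EL0_DminLine_MASK, ctr_el0);

		return field + 2;
	}

E.g. LineSize = 2 (64-byte lines) against DminLine = 4 (also 64 bytes)
passes, while LineSize = 1 (32-byte lines) would be rejected.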

>> +	for (i = 0; i < CSSELR_MAX; i++) {
>> +		if ((FIELD_GET(CCSIDR_EL1_LineSize, get_ccsidr(vcpu, i)) + 4)
> maybe use a local variable such as log2_cache_bytes
>> +		    < __get_min_cache_line_size(ctr_el0, i & CSSELR_EL1_InD))
> I don't get i & CSSELR_EL1_InD, please can you explain?

It flags the cache at this level as a data or instruction cache (see also
get_ccsidr()).
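The index follows the architected CSSELR_EL1 layout (InD in bit [0], Level
in bits [3:1]); hypothetical helpers just to illustrate:

	static inline bool csselr_selects_icache(unsigned int csselr)
	{
		return csselr & BIT(0);			/* CSSELR_EL1.InD */
	}

	static inline unsigned int csselr_cache_level(unsigned int csselr)
	{
		return (csselr >> 1) & GENMASK(2, 0);	/* CSSELR_EL1.Level, 0-based */
	}

So odd values of i select the instruction side (checked against IminLine),
even values the data/unified side (checked against DminLine).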

>> +static int set_ctr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
>> +		   u64 val)
>> +{
> don't you need to take the config_lock earlier as in set_id_reg()? isn't
> it racy versus has_ran_once?

I was about to write that this is not the case since that's an RCU-accessed
pointer not guarded by the config lock, but I confused this with
vcpu_has_run_once() ... again :-(

I'm not 100% sure we really need that, but I'll just move the lock up -
it definitely doesn't hurt (see the sketch after the quoted hunk below).

>> +	u64 ctr = vcpu->kvm->arch.ctr_el0;
>> +	u64 writable_mask = rd->val;
>> +	int ret;
>> +
>> +	if (val == ctr)
>> +		return 0;
>> +
>> +	if (kvm_vm_has_ran_once(vcpu->kvm))
>> +		return -EBUSY;
>> +
>> +	if ((ctr & ~writable_mask) != (val & ~writable_mask))
>> +		return -EINVAL;
>> +
>> +	if (((ctr & CTR_EL0_DIC_MASK) < (val & CTR_EL0_DIC_MASK)) ||
>> +	    ((ctr & CTR_EL0_IDC_MASK) < (val & CTR_EL0_IDC_MASK)) ||
>> +	    ((ctr & CTR_EL0_DminLine_MASK) < (val & CTR_EL0_DminLine_MASK)) ||
>> +	    ((ctr & CTR_EL0_IminLine_MASK) < (val & CTR_EL0_IminLine_MASK))) {
>> +		return -EINVAL;
>> +	}
>> +
>> +	mutex_lock(&vcpu->kvm->arch.config_lock);
>> +	ret = validate_cache_top(vcpu, val);
>> +	if (ret) {
>> +		mutex_unlock(&vcpu->kvm->arch.config_lock);
>> +		return ret;
> nit: use a goto out
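FWIW, one possible shape for both of these (moving the lock up and using a
goto). This is just a sketch based on the quoted v3 hunk; the final
assignment of ctr_el0 is assumed since it's cut off above:

	static int set_ctr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
			   u64 val)
	{
		u64 ctr = vcpu->kvm->arch.ctr_el0;
		u64 writable_mask = rd->val;
		int ret = 0;

		if (val == ctr)
			return 0;

		if ((ctr & ~writable_mask) != (val & ~writable_mask))
			return -EINVAL;

		if (((ctr & CTR_EL0_DIC_MASK) < (val & CTR_EL0_DIC_MASK)) ||
		    ((ctr & CTR_EL0_IDC_MASK) < (val & CTR_EL0_IDC_MASK)) ||
		    ((ctr & CTR_EL0_DminLine_MASK) < (val & CTR_EL0_DminLine_MASK)) ||
		    ((ctr & CTR_EL0_IminLine_MASK) < (val & CTR_EL0_IminLine_MASK)))
			return -EINVAL;

		mutex_lock(&vcpu->kvm->arch.config_lock);

		/* checked under config_lock so it can't race with the first run */
		if (kvm_vm_has_ran_once(vcpu->kvm)) {
			ret = -EBUSY;
			goto out;
		}

		ret = validate_cache_top(vcpu, val);
		if (ret)
			goto out;

		/* assumed: the cut-off part of the hunk stores the new value */
		vcpu->kvm->arch.ctr_el0 = val;
	out:
		mutex_unlock(&vcpu->kvm->arch.config_lock);
		return ret;
	}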

Thanks,
Sebastian

