Date:	Wed, 29 Jul 2015 17:53:02 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Vikas Shivappa <vikas.shivappa@...ux.intel.com>
Cc:	linux-kernel@...r.kernel.org, vikas.shivappa@...el.com,
	x86@...nel.org, hpa@...or.com, tglx@...utronix.de,
	mingo@...nel.org, tj@...nel.org, matt.fleming@...el.com,
	will.auld@...el.com, glenn.p.williamson@...el.com,
	kanaka.d.juvva@...el.com
Subject: Re: [PATCH 8/9] x86/intel_rdt: Hot cpu support for Cache Allocation

On Wed, Jul 01, 2015 at 03:21:09PM -0700, Vikas Shivappa wrote:
> +/*
> + * cbm_update_msrs() - Updates all the existing IA32_L3_MASK_n MSRs
> + * which are one per CLOSid except IA32_L3_MASK_0 on the current package.
> + */
> +static void cbm_update_msrs(void *info)
> +{
> +	int maxid = boot_cpu_data.x86_cache_max_closid;
> +	unsigned int i;
> +
> +	/*
> +	 * At cpureset, all bits of IA32_L3_MASK_n are set.
> +	 * The index starts from one as there is no need
> +	 * to update IA32_L3_MASK_0 as it belongs to root cgroup
> +	 * whose cache mask is all 1s always.
> +	 */
> +	for (i = 1; i < maxid; i++) {
> +		if (ccmap[i].clos_refcnt)
> +			cbm_cpu_update((void *)i);
> +	}
> +}
> +
> +static inline void intel_rdt_cpu_start(int cpu)
> +{
> +	struct intel_pqr_state *state = &per_cpu(pqr_state, cpu);
> +
> +	state->closid = 0;
> +	mutex_lock(&rdt_group_mutex);
> +	if (rdt_cpumask_update(cpu))
> +		smp_call_function_single(cpu, cbm_update_msrs, NULL, 1);
> +	mutex_unlock(&rdt_group_mutex);
> +}

If you were to guard your array with both a mutex and a raw_spinlock,
then you could avoid the IPI and use CPU_STARTING instead: that notifier
runs on the incoming CPU itself, in a context that cannot sleep but can
take a raw_spinlock, so the CPU can program its own MSRs directly.

> +static int intel_rdt_cpu_notifier(struct notifier_block *nb,
> +				  unsigned long action, void *hcpu)
> +{
> +	unsigned int cpu  = (unsigned long)hcpu;
> +
> +	switch (action) {
> +	case CPU_DOWN_FAILED:
> +	case CPU_ONLINE:
> +		intel_rdt_cpu_start(cpu);
> +		break;
> +	case CPU_DOWN_PREPARE:
> +		intel_rdt_cpu_exit(cpu);
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return NOTIFY_OK;
>  }