Message-ID: <alpine.DEB.2.10.1507301119170.921@vshiva-Udesk>
Date:	Fri, 31 Jul 2015 16:21:19 -0700 (PDT)
From:	Vikas Shivappa <vikas.shivappa@...el.com>
To:	Peter Zijlstra <peterz@...radead.org>
cc:	Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
	linux-kernel@...r.kernel.org, vikas.shivappa@...el.com,
	x86@...nel.org, hpa@...or.com, tglx@...utronix.de,
	mingo@...nel.org, tj@...nel.org,
	Matt Fleming <matt.fleming@...el.com>,
	"Auld, Will" <will.auld@...el.com>,
	"Williamson, Glenn P" <glenn.p.williamson@...el.com>,
	"Juvva, Kanaka D" <kanaka.d.juvva@...el.com>
Subject: Re: [PATCH 8/9] x86/intel_rdt: Hot cpu support for Cache
 Allocation



On Wed, 29 Jul 2015, Peter Zijlstra wrote:

> On Wed, Jul 01, 2015 at 03:21:09PM -0700, Vikas Shivappa wrote:
>> +/*
>> + * cbm_update_msrs() - Updates all the existing IA32_L3_MASK_n MSRs
>> + * which are one per CLOSid except IA32_L3_MASK_0 on the current package.
>> + */
>> +static void cbm_update_msrs(void *info)
>> +{
>> +	int maxid = boot_cpu_data.x86_cache_max_closid;
>> +	unsigned int i;
>> +
>> +	/*
>> +	 * At cpureset, all bits of IA32_L3_MASK_n are set.
>> +	 * The index starts from one as there is no need
>> +	 * to update IA32_L3_MASK_0 as it belongs to root cgroup
>> +	 * whose cache mask is all 1s always.
>> +	 */
>> +	for (i = 1; i < maxid; i++) {
>> +		if (ccmap[i].clos_refcnt)
>> +			cbm_cpu_update((void *)i);
>> +	}
>> +}
>> +
>> +static inline void intel_rdt_cpu_start(int cpu)
>> +{
>> +	struct intel_pqr_state *state = &per_cpu(pqr_state, cpu);
>> +
>> +	state->closid = 0;
>> +	mutex_lock(&rdt_group_mutex);
>> +	if (rdt_cpumask_update(cpu))
>> +		smp_call_function_single(cpu, cbm_update_msrs, NULL, 1);
>> +	mutex_unlock(&rdt_group_mutex);
>> +}
>
> If you were to guard your array with both a mutex and a raw_spinlock
> then you can avoid the IPI and use CPU_STARTING.

CPU_ONLINE was good enough here, since by that point tasks are ready to be
scheduled on the CPU; in other words, it fires at just the right time.

Could we avoid adding to the interrupts-disabled window? We don't really need
the *interrupts disabled* CPU_STARTING notification for this - that can be left
for more important, lock-free code. Or is this change not a big concern?
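
For reference, a rough sketch of what Peter's suggestion could look like:
guard the array with rdt_group_mutex for sleeping callers plus a raw_spinlock
that the CPU_STARTING path also takes, so the incoming CPU programs its own
IA32_L3_MASK_n MSRs with interrupts off and no IPI is needed. This reuses
ccmap[], cbm_cpu_update(), rdt_cpumask_update(), intel_rdt_cpu_exit() and
boot_cpu_data.x86_cache_max_closid from the quoted patch; the lock name, the
helper name and the exact hotplug wiring below are assumptions, not code from
the series, and the CPU_DOWN_FAILED case is omitted for brevity.

static DEFINE_RAW_SPINLOCK(cbm_lock);

/*
 * Sketch: process-context writers take rdt_group_mutex and then cbm_lock
 * around ccmap[] / rdt cpumask updates; the hotplug path below takes only
 * cbm_lock, so it is safe to run with interrupts disabled.
 */
static void cbm_update_msrs_local(void)
{
	int maxid = boot_cpu_data.x86_cache_max_closid;
	unsigned int i;

	/* CLOSid 0 is the root cgroup; its mask is always all 1s. */
	for (i = 1; i < maxid; i++) {
		if (ccmap[i].clos_refcnt)
			cbm_cpu_update((void *)(unsigned long)i);
	}
}

static int intel_rdt_cpu_notifier(struct notifier_block *nb,
				  unsigned long action, void *hcpu)
{
	unsigned int cpu = (unsigned long)hcpu;

	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_STARTING:
		/*
		 * Runs on @cpu itself with interrupts off: the wrmsr()s land
		 * on the right package directly, no smp_call_function_single().
		 */
		per_cpu(pqr_state, cpu).closid = 0;
		raw_spin_lock(&cbm_lock);
		if (rdt_cpumask_update(cpu))
			cbm_update_msrs_local();
		raw_spin_unlock(&cbm_lock);
		break;
	case CPU_DOWN_PREPARE:
		intel_rdt_cpu_exit(cpu);
		break;
	}

	return NOTIFY_OK;
}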

>
>> +static int intel_rdt_cpu_notifier(struct notifier_block *nb,
>> +				  unsigned long action, void *hcpu)
>> +{
>> +	unsigned int cpu  = (unsigned long)hcpu;
>> +
>> +	switch (action) {
>> +	case CPU_DOWN_FAILED:
>> +	case CPU_ONLINE:
>> +		intel_rdt_cpu_start(cpu);
>> +		break;
>> +	case CPU_DOWN_PREPARE:
>> +		intel_rdt_cpu_exit(cpu);
>> +		break;
>> +	default:
>> +		break;
>> +	}
>> +
>> +	return NOTIFY_OK;
>>  }
>