Date:	Wed, 24 Sep 2014 09:40:10 -0700
From:	Andi Kleen <andi@...stfloor.org>
To:	Matt Fleming <matt@...sole-pimps.org>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>, Jiri Olsa <jolsa@...hat.com>,
	Arnaldo Carvalho de Melo <acme@...nel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	linux-kernel@...r.kernel.org, "H. Peter Anvin" <hpa@...or.com>,
	Matt Fleming <matt.fleming@...el.com>,
	Arnaldo Carvalho de Melo <acme@...hat.com>
Subject: Re: [PATCH 08/11] perf/x86/intel: Add Intel Cache QoS Monitoring support

Matt Fleming <matt@...sole-pimps.org> writes:
>
> diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
> index 7e1fd4e08552..8abb18fbcd13 100644
> --- a/arch/x86/kernel/cpu/Makefile
> +++ b/arch/x86/kernel/cpu/Makefile
> @@ -38,7 +38,7 @@ obj-$(CONFIG_CPU_SUP_INTEL)		+= perf_event_p6.o perf_event_knc.o perf_event_p4.o
>  obj-$(CONFIG_CPU_SUP_INTEL)		+= perf_event_intel_lbr.o perf_event_intel_ds.o perf_event_intel.o
>  obj-$(CONFIG_CPU_SUP_INTEL)		+= perf_event_intel_uncore.o perf_event_intel_uncore_snb.o
>  obj-$(CONFIG_CPU_SUP_INTEL)		+= perf_event_intel_uncore_snbep.o perf_event_intel_uncore_nhmex.o
> -obj-$(CONFIG_CPU_SUP_INTEL)		+= perf_event_intel_rapl.o
> +obj-$(CONFIG_CPU_SUP_INTEL)		+= perf_event_intel_rapl.o perf_event_intel_cqm.o

What's missing to be able to build this as a module?
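Just as a sketch of what the modular path would need (none of this is in
the patch; the Kconfig tristate and the teardown body are hypothetical):

	/* hypothetical exit path -- the patch registers a cpu notifier and
	 * a PMU in intel_cqm_init(), so a module would have to undo both */
	static void __exit intel_cqm_exit(void)
	{
		/* unregister the cpu notifier and the PMU here */
	}

	module_init(intel_cqm_init);
	module_exit(intel_cqm_exit);
	MODULE_LICENSE("GPL");

plus a tristate Kconfig symbol driving the Makefile rule instead of
CONFIG_CPU_SUP_INTEL.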

> +
> +	/*
> +	 * Is @cpu a designated cqm reader?
> +	 */
> +	if (!cpumask_test_and_clear_cpu(cpu, &cqm_cpumask))
> +		return;
> +
> +	for_each_online_cpu(i) {

Shouldn't this iterate over possible cpus to avoid races? Otherwise you'll
need more locking.
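i.e. something like the following sketch -- same loop body as the quoted
code, only the iterator changes:

	/* walk possible cpus so a cpu on this package that is concurrently
	 * coming online is not missed */
	for_each_possible_cpu(i) {
		if (i == cpu)
			continue;

		if (phys_id == topology_physical_package_id(i)) {
			cpumask_set_cpu(i, &cqm_cpumask);
			break;
		}
	}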

> +		if (i == cpu)
> +			continue;
> +
> +		if (phys_id == topology_physical_package_id(i)) {
> +			cpumask_set_cpu(i, &cqm_cpumask);
> +			break;
> +		}
> +	}
> +}
> +
> +static int intel_cqm_cpu_notifier(struct notifier_block *nb,
> +				  unsigned long action, void *hcpu)
> +{
> +	unsigned int cpu  = (unsigned long)hcpu;
> +
> +	switch (action & ~CPU_TASKS_FROZEN) {
> +	case CPU_UP_PREPARE:
> +		intel_cqm_cpu_prepare(cpu);
> +		break;
> +	case CPU_DOWN_PREPARE:
> +		intel_cqm_cpu_exit(cpu);
> +		break;
> +	case CPU_STARTING:
> +		cqm_pick_event_reader(cpu);
> +		break;
> +	}
> +
> +	return NOTIFY_OK;
> +}
> +
> +static int __init intel_cqm_init(void)
> +{
> +	int i, cpu, ret;
> +
> +	if (!cpu_has(&boot_cpu_data, X86_FEATURE_CQM_OCCUP_LLC))
> +		return -ENODEV;

This should use cpufeature.h
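i.e. something like the boot_cpu_has() wrapper from <asm/cpufeature.h>
(just one way to spell it; it tests the same bit on boot_cpu_data):

	if (!boot_cpu_has(X86_FEATURE_CQM_OCCUP_LLC))
		return -ENODEV;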

> +
> +	cqm_l3_scale = boot_cpu_data.x86_cache_occ_scale;
> +
> +	/*
> +	 * It's possible that not all resources support the same number
> +	 * of RMIDs. Instead of making scheduling much more complicated
> +	 * (where we have to match a task's RMID to a cpu that supports
> +	 * that many RMIDs) just find the minimum RMIDs supported across
> +	 * all cpus.
> +	 *
> +	 * Also, check that the scales match on all cpus.
> +	 */
> +	for_each_online_cpu(cpu) {

And this should take the cpu hotplug lock (although the race may be
latent at this point if it only ever runs at early initialization).

But then what good is the check if, at that point, you're only ever
likely to look at cpu #0?

> +		struct cpuinfo_x86 *c = &cpu_data(cpu);
> +
> +		if (c->x86_cache_max_rmid < cqm_max_rmid)
> +			cqm_max_rmid = c->x86_cache_max_rmid;
> +
> +		if (c->x86_cache_occ_scale != cqm_l3_scale) {
> +			pr_err("Multiple LLC scale values, disabling\n");
> +			return -EINVAL;
> +		}
> +	}
> +
> +	ret = intel_cqm_setup_rmid_cache();
> +	if (ret)
> +		return ret;
> +
> +	for_each_online_cpu(i) {
> +		intel_cqm_cpu_prepare(i);
> +		cqm_pick_event_reader(i);
> +	}


-Andi

-- 
ak@...ux.intel.com -- Speaking for myself only