Date:	Fri, 7 Nov 2014 12:22:55 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Matt Fleming <matt@...sole-pimps.org>
Cc:	Ingo Molnar <mingo@...nel.org>, Jiri Olsa <jolsa@...hat.com>,
	Arnaldo Carvalho de Melo <acme@...nel.org>,
	Andi Kleen <andi@...stfloor.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	linux-kernel@...r.kernel.org, "H. Peter Anvin" <hpa@...or.com>,
	Kanaka Juvva <kanaka.d.juvva@...el.com>,
	Matt Fleming <matt.fleming@...el.com>,
	Arnaldo Carvalho de Melo <acme@...hat.com>
Subject: Re: [PATCH 09/11] perf/x86/intel: Support task events with Intel CQM

On Fri, Nov 07, 2014 at 10:09:59AM +0000, Matt Fleming wrote:
> On Fri, 07 Nov, at 10:08:04AM, Peter Zijlstra wrote:
> > 
> > How is that supposed to work? You call __intel_cqm_event_count() on the
> > one cpu per socket, but then you use a local_add, not an atomic_add,
> > even though these adds can happen concurrently as per IPI broadcast.
>  
> Ouch, right. That's broken.
> 
> > Also, I think smp_call_function_many() ignores the current cpu; if this
> > cpu happens to be the cpu for this socket, you're up a creek without a
> > paddle, right?
>  
> OK, I didn't realise that. Yeah that sounds very problematic. I think my
> eyes skipped over the word "other" in the smp_call_function_many() docs,
> 
>  * smp_call_function_many(): Run a function on a set of other CPUs.
> 
> So, the correct way to do this is to iterate over cqm_cpumask and invoke
> smp_call_function_single(), right?
> 
> > Thirdly, there is no serialization around calling perf_event_count() [or
> > your pmu::count method] so you cannot temporarily set it to 0.
> 
> Urgh, thanks. Good spot. I'm gonna have to think of a suitable
> serialisation mechanism because all the current ones are pretty
> heavy-handed. And of course, there's the added fun that it needs to be
> held across the IPIs.
> 
> Perhaps a per-cache-group mutex?
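
Something like the below should address all three points without a
per-group mutex: each socket's read accumulates into an on-stack
atomic64_t rather than into the event count itself (so nothing has to
be temporarily zeroed), and the current CPU is handled explicitly
because smp_call_function_many() skips it.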

struct rmid_read {
	unsigned int rmid;
	atomic64_t value;
};

/*
 * IPI handler: read this socket's occupancy for the RMID and
 * accumulate it into the shared on-stack counter. Concurrent adds
 * from other sockets are safe because the counter is an atomic64_t.
 */
static void __remote_rmid_read(void *arg)
{
	struct rmid_read *rr = arg;
	u64 val = __rmid_read(rr->rmid);

	if (val & (RMID_VAL_ERROR | RMID_VAL_UNAVAIL))
		return;

	atomic64_add(val, &rr->value);
}

static void task_rmid_read(struct perf_event *event)
{
	struct rmid_read rr = {
		.rmid = ...,
		.value = ATOMIC64_INIT(0),
	};
	int cpu;

	...

	/*
	 * smp_call_function_many() only targets *other* CPUs, so if the
	 * current CPU is in cqm_cpumask we must do the read ourselves.
	 * get_cpu() pins us to this CPU for the duration; wait=true is
	 * required because rr lives on the stack.
	 */
	cpu = get_cpu();
	smp_call_function_many(&cqm_cpumask, __remote_rmid_read, &rr, true);
	if (cpumask_test_cpu(cpu, &cqm_cpumask)) {
		/* Disable IRQs to mirror the remote IPI context. */
		local_irq_disable();
		__remote_rmid_read(&rr);
		local_irq_enable();
	}
	put_cpu();

	local64_set(&event->count, atomic64_read(&rr.value));
}
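
For comparison, the smp_call_function_single() iteration Matt asks
about above would look roughly like the sketch below (the name
task_rmid_read_single is illustrative, not from the patch):

/*
 * Read each socket's RMID value with a synchronous per-CPU call.
 * Unlike smp_call_function_many(), smp_call_function_single() also
 * runs the function when cpu is the current CPU, so no special
 * casing of the local socket is needed.
 */
static void task_rmid_read_single(struct perf_event *event)
{
	struct rmid_read rr = {
		.rmid = ...,
		.value = ATOMIC64_INIT(0),
	};
	int cpu;

	for_each_cpu(cpu, &cqm_cpumask)
		smp_call_function_single(cpu, __remote_rmid_read, &rr, 1);

	local64_set(&event->count, atomic64_read(&rr.value));
}

The trade-off is one synchronous IPI round-trip per socket instead of
a single broadcast, which may be slower on large multi-socket boxes.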

