Message-ID: <1263903773.4283.657.camel@laptop>
Date:	Tue, 19 Jan 2010 13:22:53 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	Stephane Eranian <eranian@...gle.com>,
	linux-kernel@...r.kernel.org, mingo@...e.hu, paulus@...ba.org,
	davem@...emloft.net, perfmon2-devel@...ts.sf.net, eranian@...il.com
Subject: Re: [PATCH]  perf_events: improve x86 event scheduling (v5)

On Mon, 2010-01-18 at 18:29 +0100, Frederic Weisbecker wrote:

> It has constraints that only need to be checked when we register
> the event. It also has constraints at enable time, but nothing
> tricky that requires overriding the group scheduling.

The fact that ->enable() can fail makes it a hardware counter; software
counters cannot fail to enable.
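
Roughly the distinction, as a sketch only; these are not the real
callbacks and the helper below is made up:

	/* software pmu: there is nothing to run out of, so enabling
	 * always succeeds */
	static int sw_pmu_enable(struct perf_event *event)
	{
		return 0;
	}

	/* hardware pmu: a fixed number of counters, so placing the
	 * event can fail once they are all taken */
	static int hw_pmu_enable(struct perf_event *event)
	{
		if (!reserve_counter(event)) /* hypothetical helper */
			return -EAGAIN;
		return 0;
	}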

Having multiple groups of failable events (multiple hardware pmus) can
go wrong with the current core in interesting ways; look, for example,
at __perf_event_sched_in():

It does:

	int can_add_hw = 1;

	...

	list_for_each_entry(event, &ctx->flexible_groups, group_entry) {
		/* Ignore events in OFF or ERROR state */
		if (event->state <= PERF_EVENT_STATE_OFF)
			continue;
		/*
		 * Listen to the 'cpu' scheduling filter constraint
		 * of events:
		 */
		if (event->cpu != -1 && event->cpu != cpu)
			continue;

		if (group_can_go_on(event, cpuctx, can_add_hw))
			if (group_sched_in(event, cpuctx, ctx, cpu))
				can_add_hw = 0;
	}

Now, if you look at that logic you'll see that it assumes there is only
one hw device, since it has only one can_add_hw state. So if your
hw_breakpoint pmu starts to fail, we'll also stop adding counters to the
cpu pmu (for lack of a better name), and vice versa.

This might be fixable by using per-cpu struct pmu variables. 
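
Very roughly, the scheduling loop above would then track that state per
pmu instead of in a single local; a sketch only, the field below does
not exist and would still have to be made per-cpu:

	list_for_each_entry(event, &ctx->flexible_groups, group_entry) {
		...
		/* per-pmu instead of one shared can_add_hw; would need
		 * resetting to 1 on every sched-in */
		if (group_can_go_on(event, cpuctx, event->pmu->can_add_hw))
			if (group_sched_in(event, cpuctx, ctx, cpu))
				event->pmu->can_add_hw = 0;
	}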

I'm going to try to move all the weak hw_perf_* functions into struct
pmu and create a notifier-like call chain for them, so we can have
proper per-pmu state, and then use that to fix these things up.
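
Very much hand-waving at this point, but the rough shape would be
something like this (all names below are placeholders, nothing is
decided):

	struct pmu {
		int	(*enable)	(struct perf_event *event);
		void	(*disable)	(struct perf_event *event);
		...
		/* what are currently weak hw_perf_* functions: */
		void	(*pmu_disable)	(struct pmu *pmu);
		void	(*pmu_enable)	(struct pmu *pmu);
		int	(*group_sched_in) (struct perf_event *group_leader,
					   struct perf_cpu_context *cpuctx,
					   struct perf_event_context *ctx,
					   int cpu);
	};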

However, I'm afraid it's far too late to push any of that into .33,
which means .33 will have rather funny behaviour once the breakpoints
start getting used.

