Message-ID: <tip-3f005e7de3db8d0b3f7a1f399aa061dc35b65864@git.kernel.org>
Date:	Wed, 10 Aug 2016 10:57:08 -0700
From:	tip-bot for Mark Rutland <tipbot@...or.com>
To:	linux-tip-commits@...r.kernel.org
Cc:	torvalds@...ux-foundation.org, acme@...nel.org, tglx@...utronix.de,
	peterz@...radead.org, hpa@...or.com, linux-kernel@...r.kernel.org,
	mark.rutland@....com, vincent.weaver@...ne.edu, mingo@...nel.org,
	jolsa@...hat.com, acme@...hat.com,
	alexander.shishkin@...ux.intel.com, eranian@...gle.com
Subject: [tip:perf/core] perf/core: Sched out groups atomically

Commit-ID:  3f005e7de3db8d0b3f7a1f399aa061dc35b65864
Gitweb:     http://git.kernel.org/tip/3f005e7de3db8d0b3f7a1f399aa061dc35b65864
Author:     Mark Rutland <mark.rutland@....com>
AuthorDate: Tue, 26 Jul 2016 18:12:21 +0100
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Wed, 10 Aug 2016 13:13:23 +0200

perf/core: Sched out groups atomically

Groups of events are supposed to be scheduled atomically, such that it
is possible to derive meaningful ratios between their values (for
example, instructions per cycle from a {cycles, instructions} group).

We take great pains to achieve this when scheduling event groups to a
PMU in group_sched_in(), calling {start,commit}_txn() (which fall back
to perf_pmu_{disable,enable}() if necessary) to provide this guarantee.
However, we don't mirror this in group_sched_out(), and in some cases
events will not be scheduled out atomically.
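
For reference, the scheduling-in side looks roughly like the sketch
below (simplified from kernel/events/core.c around this commit, not
the verbatim source; the partial-group unwind is elided):

static int group_sched_in(struct perf_event *group_event,
			  struct perf_cpu_context *cpuctx,
			  struct perf_event_context *ctx)
{
	struct perf_event *event;
	struct pmu *pmu = ctx->pmu;

	if (group_event->state == PERF_EVENT_STATE_OFF)
		return 0;

	/*
	 * Open a transaction; start_txn() falls back to
	 * perf_pmu_disable() if the PMU has no real transaction
	 * support, so nothing counts while the group is scheduled in.
	 */
	pmu->start_txn(pmu, PERF_PMU_TXN_ADD);

	if (event_sched_in(group_event, cpuctx, ctx))
		goto group_error;

	/* Schedule in the siblings as one unit. */
	list_for_each_entry(event, &group_event->sibling_list, group_entry) {
		if (event_sched_in(event, cpuctx, ctx))
			goto group_error;
	}

	/* commit_txn() re-enables the PMU; the whole group starts at once. */
	if (!pmu->commit_txn(pmu))
		return 0;

group_error:
	/* ... schedule out any partially scheduled events ... */
	pmu->cancel_txn(pmu);
	return -EAGAIN;
}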

For example, if we disable an event group with PERF_EVENT_IOC_DISABLE,
we'll cross-call __perf_event_disable() for the group leader, and will
call group_sched_out() without having first disabled the relevant PMU.
We will disable/enable the PMU around each pmu->del() call, but between
each call the PMU will be enabled and events may count.
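
For illustration, a minimal userspace sequence that reaches this path
(a hypothetical sketch with error handling omitted): build a two-event
group with perf_event_open() and disable it through the group leader's
file descriptor:

/*
 * Minimal sketch: create a {cycles, instructions} group and disable
 * it via the leader.  Illustrative only; no error checking.
 */
#include <linux/perf_event.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			   int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	int leader, sibling;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	leader = perf_event_open(&attr, 0, -1, -1, 0);

	attr.config = PERF_COUNT_HW_INSTRUCTIONS;
	sibling = perf_event_open(&attr, 0, -1, leader, 0);

	/* ... run the workload being measured ... */

	/*
	 * Disabling the leader schedules the whole group out.  Without
	 * this fix, the events are removed from the PMU one at a time
	 * with the PMU re-enabled in between, so the final counts may
	 * not cover identical periods.
	 */
	ioctl(leader, PERF_EVENT_IOC_DISABLE, 0);

	close(sibling);
	close(leader);
	return 0;
}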

Avoid this by explicitly disabling and enabling the PMU around event
removal in group_sched_out(), mirroring what we do in group_sched_in().

Signed-off-by: Mark Rutland <mark.rutland@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: Arnaldo Carvalho de Melo <acme@...hat.com>
Cc: Jiri Olsa <jolsa@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Stephane Eranian <eranian@...gle.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Vince Weaver <vincent.weaver@...ne.edu>
Link: http://lkml.kernel.org/r/1469553141-28314-1-git-send-email-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 kernel/events/core.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 1903b8f..11f6bbe 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1796,6 +1796,8 @@ group_sched_out(struct perf_event *group_event,
 	struct perf_event *event;
 	int state = group_event->state;
 
+	perf_pmu_disable(ctx->pmu);
+
 	event_sched_out(group_event, cpuctx, ctx);
 
 	/*
@@ -1804,6 +1806,8 @@ group_sched_out(struct perf_event *group_event,
 	list_for_each_entry(event, &group_event->sibling_list, group_entry)
 		event_sched_out(event, cpuctx, ctx);
 
+	perf_pmu_enable(ctx->pmu);
+
 	if (state == PERF_EVENT_STATE_ACTIVE && group_event->attr.exclusive)
 		cpuctx->exclusive = 0;
 }
