Message-ID: <321adba9-2b02-7352-78f1-29c578d1e6c0@linux.intel.com>
Date:   Wed, 21 Jun 2017 21:31:07 +0300
From:   Alexey Budankov <alexey.budankov@...ux.intel.com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc:     Andi Kleen <ak@...ux.intel.com>, Kan Liang <kan.liang@...el.com>,
        Dmitri Prokhorov <Dmitry.Prohorov@...el.com>,
        Valery Cherepennikov <valery.cherepennikov@...el.com>,
        Mark Rutland <mark.rutland@....com>,
        David Carrillo-Cisneros <davidcc@...gle.com>,
        Stephane Eranian <eranian@...gle.com>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: [PATCH v4 2/4] perf/core: addressing 4x slowdown during per-process
 profiling of STREAM benchmark on Intel Xeon Phi


perf/core: use context tstamp_data for skipped events on mux interrupt

By default, the userspace perf tool opens per-cpu task-bound events
when sampling, so for N logical events requested by the user, the tool
will open N * NR_CPUS events.
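
As an illustration of what "per-cpu task-bound" means from userspace, here
is a minimal sketch; the helper name and the error handling are invented,
only the perf_event_open() calling convention (pid of the target task, one
call per cpu, group_fd == -1) is real:

#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Open one fd per cpu for a single logical event, all bound to 'pid'. */
static int open_event_on_all_cpus(struct perf_event_attr *attr, pid_t pid,
				  int nr_cpus, int *fds)
{
	int cpu;

	for (cpu = 0; cpu < nr_cpus; cpu++) {
		fds[cpu] = syscall(SYS_perf_event_open, attr, pid, cpu, -1, 0);
		if (fds[cpu] < 0)
			return -1;
	}
	return 0;
}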

In the kernel, we mux events with a hrtimer, periodically rotating the
flexible group list and trying to schedule each group in turn. We skip
groups whose cpu filter doesn't match. So when we get unlucky, we can 
walk N * (NR_CPUS - 1) groups pointlessly for each hrtimer invocation.
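
Roughly, the per-invocation walk before this series looks like the sketch
below; it is simplified from ctx_flexible_sched_in() and is not the exact
kernel code, but it shows that every group is visited even though most of
them fail the cpu filter check:

static void flexible_sched_in_sketch(struct perf_event_context *ctx,
				     struct perf_cpu_context *cpuctx)
{
	struct perf_event *event;

	/* O(N * NR_CPUS) walk: most groups fail event_filter_match() */
	list_for_each_entry(event, &ctx->flexible_groups, group_entry) {
		if (!event_filter_match(event))
			continue;	/* skipped, but still visited */
		group_sched_in(event, cpuctx, ctx);
	}
}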

This has been observed to result in significant overhead when running
the STREAM benchmark on 272-core Xeon Phi systems.

One way to avoid this is to place our events into an rb tree sorted by
CPU filter, so that our hrtimer can skip to the current CPU's
list and ignore everything else.
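
For reference, a minimal lookup sketch of how the mux path can jump
straight to a cpu's groups, assuming the tree introduced earlier in this
series is keyed by event->cpu and that groups sharing a cpu hang off the
group_list of the event that sits in the tree (the function name is
illustrative):

static struct perf_event *
perf_event_groups_first_sketch(struct rb_root *root, int cpu)
{
	struct rb_node *node = root->rb_node;

	while (node) {
		struct perf_event *event =
			rb_entry(node, struct perf_event, group_node);

		if (cpu < event->cpu)
			node = node->rb_left;
		else if (cpu > event->cpu)
			node = node->rb_right;
		else
			return event;	/* head of this cpu's group_list */
	}

	return NULL;
}

The mux path then only needs the sub-lists for cpu == -1 and for the
current cpu, instead of walking all flexible groups.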

However, skipped events still need their tstamp_* fields properly
maintained when groups are switched.

To implement that, a tstamp_data object is introduced in the event
context, and skipped events' tstamp pointers are redirected to it. The
object's timings are updated only once per mux hrtimer interrupt, by
update_context_time(), so iterating over the skipped events can be avoided
while their tstamp_* timings are still kept up to date.
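
Condensed, the mechanism added by the diff below amounts to two pieces:
sched_in_group() points a filtered-out event at the context-wide object,
and update_context_time() advances that object once per interrupt:

	/* sched_in_group(): skipped events share the context tstamp */
	if (!event_filter_match(event)) {
		if (event->tstamp != &ctx->tstamp_data)
			event->tstamp = &ctx->tstamp_data;
		return;
	}

	/* update_context_time(): one update covers all skipped events */
	ctx->tstamp_data.running += ctx->time - ctx->tstamp_data.stopped;
	ctx->tstamp_data.stopped = ctx->time;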

Signed-off-by: Alexey Budankov <alexey.budankov@...ux.intel.com>
---
  include/linux/perf_event.h | 36 ++++++++++++++++++++----------
  kernel/events/core.c       | 55 +++++++++++++++++++++++++++-------------------
  2 files changed, 57 insertions(+), 34 deletions(-)

1. separated the tstamp_enabled, tstamp_running and tstamp_stopped
    fields into a new struct perf_event_tstamp type with enabled, running
    and stopped fields.

2. introduced a tstamp pointer in perf_event and tstamp_data objects
    in both perf_event and perf_event_context.

3. updated event_sched_out(), sched_in_group() and perf_event_alloc()
    to properly maintain the tstamp pointer.

4. implemented updating of the context's tstamp_data times in
    update_context_time().

5. updated references in the code to accommodate the new data layout of
    the perf_event and perf_event_context objects.

6. corrected some formatting issues.

The patch was tested on Xeon Phi under perf_fuzzer and the perf_event_tests
suite (https://github.com/deater/perf_event_tests); no new issues were found
in comparison to the clean kernel.

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index d2fb0e7..7f0cf63 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -550,6 +550,22 @@ struct pmu_event_list {
  	struct list_head	list;
  };

+struct perf_event_tstamp {
+	/*
+	 * These are timestamps used for computing total_time_enabled
+	 * and total_time_running when the event is in INACTIVE or
+	 * ACTIVE state, measured in nanoseconds from an arbitrary point
+	 * in time.
+	 * enabled: the notional time when the event was enabled
+	 * running: the notional time when the event was scheduled on
+	 * stopped: in INACTIVE state, the notional time when the
+	 *    event was scheduled off.
+	 */
+	u64 enabled;
+	u64 running;
+	u64 stopped;
+};
+
  /**
   * struct perf_event - performance event kernel representation:
   */
@@ -578,14 +594,14 @@ struct perf_event {
  	 * to the tree but to group_list list of the event directly
  	 * attached to the tree;
  	 */
-	struct rb_node            	group_node;
+	struct rb_node			group_node;
  	/*
  	 * List keeps groups allocated for the same cpu;
  	 * the list may be empty in case its event is not directly
  	 * attached to the tree but to group_list list of the event directly
  	 * attached to the tree;
  	 */
-	struct list_head 	       	group_list;
+	struct list_head		group_list;
  	/*
  	 * Entry into the group_list list above;
  	 * the entry may be attached to the self group_list list above
@@ -631,19 +647,11 @@ struct perf_event {
  	u64				total_time_running;

  	/*
-	 * These are timestamps used for computing total_time_enabled
-	 * and total_time_running when the event is in INACTIVE or
-	 * ACTIVE state, measured in nanoseconds from an arbitrary point
-	 * in time.
-	 * tstamp_enabled: the notional time when the event was enabled
-	 * tstamp_running: the notional time when the event was scheduled on
-	 * tstamp_stopped: in INACTIVE state, the notional time when the
-	 *	event was scheduled off.
+	 * tstamp points to the tstamp_data object below or to the object
+	 * located at the event context;
  	 */
-	u64				tstamp_enabled;
-	u64				tstamp_running;
-	u64				tstamp_stopped;
-
+	struct perf_event_tstamp	*tstamp;
+	struct perf_event_tstamp	tstamp_data;
  	/*
  	 * timestamp shadows the actual context timing but it can
  	 * be safely used in NMI interrupt context. It reflects the
@@ -771,7 +779,7 @@ struct perf_event_context {

  	struct list_head		active_ctx_list;
  	struct perf_event_groups	pinned_groups;
-	struct perf_event_groups        flexible_groups;
+	struct perf_event_groups	flexible_groups;
  	struct list_head		event_list;
  	int				nr_events;
  	int				nr_active;
@@ -787,6 +795,10 @@ struct perf_event_context {
  	 */
  	u64				time;
  	u64				timestamp;
+	/*
+	 * Context cache for filtered out events;
+	 */
+	struct perf_event_tstamp	tstamp_data;

  	/*
  	 * These fields let us detect when two contexts have both
diff --git a/kernel/events/core.c b/kernel/events/core.c
index fc37e30..6eb1c3f 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -865,10 +865,10 @@ perf_cgroup_mark_enabled(struct perf_event *event,

  	event->cgrp_defer_enabled = 0;

-	event->tstamp_enabled = tstamp - event->total_time_enabled;
+	event->tstamp->enabled = tstamp - event->total_time_enabled;
  	list_for_each_entry(sub, &event->sibling_list, group_entry) {
  		if (sub->state >= PERF_EVENT_STATE_INACTIVE) {
-			sub->tstamp_enabled = tstamp - sub->total_time_enabled;
+			sub->tstamp->enabled = tstamp - sub->total_time_enabled;
  			sub->cgrp_defer_enabled = 0;
  		}
  	}
@@ -1378,6 +1378,9 @@ static void update_context_time(struct perf_event_context *ctx)

  	ctx->time += now - ctx->timestamp;
  	ctx->timestamp = now;
+
+	ctx->tstamp_data.running += ctx->time - ctx->tstamp_data.stopped;
+	ctx->tstamp_data.stopped = ctx->time;
  }

  static u64 perf_event_time(struct perf_event *event)
@@ -1419,16 +1422,16 @@ static void update_event_times(struct perf_event *event)
  	else if (ctx->is_active)
  		run_end = ctx->time;
  	else
-		run_end = event->tstamp_stopped;
+		run_end = event->tstamp->stopped;

-	event->total_time_enabled = run_end - event->tstamp_enabled;
+	event->total_time_enabled = run_end - event->tstamp->enabled;

  	if (event->state == PERF_EVENT_STATE_INACTIVE)
-		run_end = event->tstamp_stopped;
+		run_end = event->tstamp->stopped;
  	else
  		run_end = perf_event_time(event);

-	event->total_time_running = run_end - event->tstamp_running;
+	event->total_time_running = run_end - event->tstamp->running;

  }

@@ -2023,9 +2026,13 @@ event_sched_out(struct perf_event *event,
  	 */
  	if (event->state == PERF_EVENT_STATE_INACTIVE &&
  	    !event_filter_match(event)) {
-		delta = tstamp - event->tstamp_stopped;
-		event->tstamp_running += delta;
-		event->tstamp_stopped = tstamp;
+		delta = tstamp - event->tstamp->stopped;
+		event->tstamp->running += delta;
+		event->tstamp->stopped = tstamp;
+		if (event->tstamp != &event->tstamp_data) {
+			event->tstamp_data = *event->tstamp;
+			event->tstamp = &event->tstamp_data;
+		}
  	}

  	if (event->state != PERF_EVENT_STATE_ACTIVE)
@@ -2033,7 +2040,7 @@ event_sched_out(struct perf_event *event,

  	perf_pmu_disable(event->pmu);

-	event->tstamp_stopped = tstamp;
+	event->tstamp->stopped = tstamp;
  	event->pmu->del(event, 0);
  	event->oncpu = -1;
  	event->state = PERF_EVENT_STATE_INACTIVE;
@@ -2324,7 +2331,7 @@ event_sched_in(struct perf_event *event,
  		goto out;
  	}

-	event->tstamp_running += tstamp - event->tstamp_stopped;
+	event->tstamp->running += tstamp - event->tstamp->stopped;

  	if (!is_software_event(event))
  		cpuctx->active_oncpu++;
@@ -2396,8 +2403,8 @@ group_sched_in(struct perf_event *group_event,
  			simulate = true;

  		if (simulate) {
-			event->tstamp_running += now - event->tstamp_stopped;
-			event->tstamp_stopped = now;
+			event->tstamp->running += now - event->tstamp->stopped;
+			event->tstamp->stopped = now;
  		} else {
  			event_sched_out(event, cpuctx, ctx);
  		}
@@ -2453,8 +2460,11 @@ sched_in_group(struct perf_event *event, struct perf_cpu_context *cpuctx,
  	 * Listen to the 'cpu' scheduling filter constraint
  	 * of events:
  	 */
-	if (!event_filter_match(event))
-		return;
+	if (!event_filter_match(event)) {
+		if (event->tstamp != &ctx->tstamp_data)
+			event->tstamp = &ctx->tstamp_data;
+		return;
+	}

  	/* may need to reset tstamp_enabled */
  	if (is_cgroup_event(event))
@@ -2503,9 +2513,9 @@ static void add_event_to_ctx(struct perf_event *event,

  	list_add_event(event, ctx);
  	perf_group_attach(event);
-	event->tstamp_enabled = tstamp;
-	event->tstamp_running = tstamp;
-	event->tstamp_stopped = tstamp;
+	event->tstamp->enabled = tstamp;
+	event->tstamp->running = tstamp;
+	event->tstamp->stopped = tstamp;
  }

  static void ctx_sched_out(struct perf_event_context *ctx,
@@ -2750,10 +2760,10 @@ static void __perf_event_mark_enabled(struct perf_event *event)
  	u64 tstamp = perf_event_time(event);

  	event->state = PERF_EVENT_STATE_INACTIVE;
-	event->tstamp_enabled = tstamp - event->total_time_enabled;
+	event->tstamp->enabled = tstamp - event->total_time_enabled;
  	list_for_each_entry(sub, &event->sibling_list, group_entry) {
  		if (sub->state >= PERF_EVENT_STATE_INACTIVE)
-			sub->tstamp_enabled = tstamp - sub->total_time_enabled;
+			sub->tstamp->enabled = tstamp - sub->total_time_enabled;
  	}
  }

@@ -5090,8 +5100,8 @@ static void calc_timer_values(struct perf_event *event,

  	*now = perf_clock();
  	ctx_time = event->shadow_ctx_time + *now;
-	*enabled = ctx_time - event->tstamp_enabled;
-	*running = ctx_time - event->tstamp_running;
+	*enabled = ctx_time - event->tstamp->enabled;
+	*running = ctx_time - event->tstamp->running;
  }

  static void perf_event_init_userpage(struct perf_event *event)
@@ -9642,6 +9652,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
  	raw_spin_lock_init(&event->addr_filters.lock);

  	atomic_long_set(&event->refcount, 1);
+	event->tstamp		= &event->tstamp_data;
  	event->cpu		= cpu;
  	event->attr		= *attr;
  	event->group_leader	= group_leader;
