Message-Id: <20190408173252.37932-1-eranian@google.com>
Date: Mon, 8 Apr 2019 10:32:50 -0700
From: Stephane Eranian <eranian@...gle.com>
To: linux-kernel@...r.kernel.org
Cc: peterz@...radead.org, tglx@...utronix.de, ak@...ux.intel.com,
kan.liang@...el.com, mingo@...e.hu, nelson.dsouza@...el.com,
jolsa@...hat.com, tonyj@...e.com
Subject: [PATCH v2 0/3] perf/x86/intel: force reschedule on TFA changes
This short patch series improves the TFA patch series by adding a
guarantee to users each time the allow_force_tsx_abort (TFA) sysctl
control knob is modified.
The current TFA support in perf_events operates as follows:
- TFA=1
The PMU has priority over TSX, if PMC3 is needed, then TSX transactions
are forced to abort. PMU has access to PMC3 and can schedule events on it.
- TFA=0
TSX has priority over PMU. If PMC3 is needed for an event, then the event
must be scheduled on another counter. PMC3 is not available.
When a sysadmin modifies TFA, the current code base changes neither the
events being measured at the time nor the actual MSR controlling TFA. If the
kernel transitions from TFA=1 to TFA=0, nothing happens until the events are
descheduled on context switch, multiplexing, or termination of measurement.
That means TSX transactions keep failing until then, and there is no easy way
to evaluate how long that can take.
This patch series addresses the issue by rescheduling the events as part of
the sysctl change. That guarantees that no perf_events event is still running
on PMC3 by the time the write() syscall (from the echo) returns, and that TSX
transactions may succeed from then on. Similarly, when transitioning from
TFA=0 to TFA=1, the events are rescheduled and can use PMC3 immediately if
needed, and TSX transactions systematically abort by the time the write()
syscall returns.
To make this work, the patch uses an existing reschedule function in the generic
code, ctx_resched(). In V2, we export a new function called perf_ctx_resched()
which takes care of locking the contexts and invoking ctx_resched().
The patch adds an x86_get_pmu() call, which is less than ideal, but I am open
to suggestions here.
In V2, we also switched from kstrtoul() to kstrtobool() and added the proper
get_online_cpus()/put_online_cpus().
Signed-off-by: Stephane Eranian <eranian@...gle.com>
Stephane Eranian (2):
perf/core: add perf_ctx_resched() as global function
perf/x86/intel: force resched when TFA sysctl is modified
arch/x86/events/core.c | 4 +++
arch/x86/events/intel/core.c | 53 ++++++++++++++++++++++++++++++++++--
arch/x86/events/perf_event.h | 1 +
include/linux/perf_event.h | 14 ++++++++++
kernel/events/core.c | 18 ++++++------
5 files changed, 79 insertions(+), 11 deletions(-)
--
2.21.0.392.gf8f6787159e-goog