Message-Id: <1482931866-6018-3-git-send-email-jolsa@kernel.org>
Date:   Wed, 28 Dec 2016 14:31:04 +0100
From:   Jiri Olsa <jolsa@...nel.org>
To:     Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:     lkml <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>,
        Andi Kleen <andi@...stfloor.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Vince Weaver <vince@...ter.net>
Subject: [PATCH 2/4] perf/x86: Fix period for non sampling events

When in counting mode we set up the counter with the
longest possible period and read its value with the
read syscall.
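
For context, this is roughly how a non-sampling event ends up
armed with the longest possible period on x86; a paraphrase of
the hw_config path in arch/x86/events/core.c, not an exact
quote:

  /*
   * attr.sample_period == 0 means a counting event: arm the
   * counter with the widest period the PMU supports and let
   * user space fetch the value with read().
   */
  if (!hwc->sample_period) {
          hwc->sample_period = x86_pmu.max_period;
          hwc->last_period   = hwc->sample_period;
          local64_set(&hwc->period_left, hwc->sample_period);
  }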

We also still set up the PMI to be triggered when such
a counter overflows, so that it gets reconfigured.

We also get a PEBS interrupt if such a counter has precise_ip
set (which makes no sense, but it's possible).
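
For reference, such an event can be requested from user space
roughly like this (a minimal sketch, error handling omitted;
whether the PMU accepts the combination depends on the
hardware and the chosen event):

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <sys/syscall.h>
  #include <linux/perf_event.h>

  int main(void)
  {
          struct perf_event_attr attr;
          uint64_t count;
          int fd;

          memset(&attr, 0, sizeof(attr));
          attr.size       = sizeof(attr);
          attr.type       = PERF_TYPE_HARDWARE;
          attr.config     = PERF_COUNT_HW_CPU_CYCLES;
          attr.disabled   = 1;
          attr.precise_ip = 2;    /* request PEBS */
          /* attr.sample_period stays 0 -> non-sampling (counting) event */

          fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);

          ioctl(fd, PERF_EVENT_IOC_RESET, 0);
          ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
          /* ... workload ... */
          ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

          read(fd, &count, sizeof(count));
          printf("cycles: %llu\n", (unsigned long long)count);
          close(fd);
          return 0;
  }

The same combination can also be asked for with something like
'perf stat -e cycles:pp', since the precise modifiers are
accepted for counting events as well.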

Having such a counter with:
  - counting mode
  - precise_ip set

I watched my server get stuck serving the PEBS interrupt
again and again because of the following (AFAICS):

  - the PEBS interrupt is triggered before the PMI
  - when the PEBS handling path reconfigured the counter,
    it had a remaining value of -256
  - x86_perf_event_set_period does not consider this an
    extreme value, so it is programmed back as the new
    counter value
  - this makes the PEBS interrupt trigger again right away
  - and because it's a non-sampling event, this irq storm
    is never throttled
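
To show where the hunk below lands, the shape of
x86_perf_event_set_period() at this point is roughly the
following (paraphrased, not an exact quote of the kernel
source):

  s64 left   = local64_read(&hwc->period_left);
  s64 period = hwc->sample_period;

  /* the new hunk goes here: zero 'left' for non-sampling events */

  /* if we are way outside a reasonable range, skip forward */
  if (unlikely(left <= -period)) {
          left = period;
          local64_set(&hwc->period_left, left);
          hwc->last_period = period;
  }

  /* the period elapsed, start the next one */
  if (unlikely(left <= 0)) {
          left += period;
          local64_set(&hwc->period_left, left);
          hwc->last_period = period;
  }

  /* some CPUs don't like a period of just one event */
  if (unlikely(left < 2))
          left = 2;

  if (left > x86_pmu.max_period)
          left = x86_pmu.max_period;

  /* the counter counts up, so program the negated value */
  local64_set(&hwc->prev_count, (u64)-left);
  wrmsrl(hwc->event_base, (u64)(-left) & x86_pmu.cntval_mask);

Per the description above, a small negative leftover such as
-256 is not treated as an extreme value by these checks, and
the counter ends up re-armed in a way that triggers the PEBS
interrupt again right away; the hunk below forces non-sampling
events to start from the full period instead.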

Forcing non-sampling events to be reconfigured from scratch
is probably not the best solution, but it seems to work.

Signed-off-by: Jiri Olsa <jolsa@...nel.org>
---
 arch/x86/events/core.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index f1c22584a46f..657486be9780 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1116,6 +1116,13 @@ int x86_perf_event_set_period(struct perf_event *event)
 		return 0;
 
 	/*
+	 * For a non-sampling event we are not interested in
+	 * the leftover; force the count from the beginning.
+	 */
+	if (left && !is_sampling_event(event))
+		left = 0;
+
+	/*
 	 * If we are way outside a reasonable range then just skip forward:
 	 */
 	if (unlikely(left <= -period)) {
-- 
2.7.4
