Date:   Tue,  5 Sep 2017 16:30:22 +0300
From:   Alexander Shishkin <alexander.shishkin@...ux.intel.com>
To:     Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:     Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
        acme@...hat.com, kirill.shutemov@...ux.intel.com,
        Borislav Petkov <bp@...en8.de>, rric@...nel.org,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Subject: [RFC PATCH 13/17] perf: Re-inject shmem buffers after exec

An exec unmaps everything, but we want our shmem buffers to persist.
Clearing the buffer's shmem_file_addr tells the page-pinning task work
to re-mmap the event's ring buffer once the task has exec'ed.

Signed-off-by: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
---
 kernel/events/core.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
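
For illustration only, not part of the patch: the other half of the scheme
would live in the page-pinning task work, which re-establishes the mapping
when it finds shmem_file_addr cleared. In the sketch below, rb->shmem_file,
perf_shmem_rb_size() and the function name are made-up placeholders, while
ring_buffer_get()/ring_buffer_put() and vm_mmap() are existing kernel
interfaces.

static void perf_shmem_event_reinject(struct perf_event *event)
{
	struct ring_buffer *rb;
	unsigned long addr;

	/* Pin the buffer so it can't go away while we sleep in vm_mmap() */
	rb = ring_buffer_get(event);
	if (!rb)
		return;

	if (!rb->shmem_file_addr) {
		/*
		 * vm_mmap() may sleep, which is fine in task-work context;
		 * rb->shmem_file and perf_shmem_rb_size() stand in for the
		 * backing shmem file and its mapped size.
		 */
		addr = vm_mmap(rb->shmem_file, 0, perf_shmem_rb_size(rb),
			       PROT_READ | PROT_WRITE, MAP_SHARED, 0);
		if (!IS_ERR_VALUE(addr))
			rb->shmem_file_addr = addr;
	}

	ring_buffer_put(rb);
}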

diff --git a/kernel/events/core.c b/kernel/events/core.c
index e00f1f6aaf..f0b77b33b4 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6482,6 +6482,29 @@ static void perf_event_addr_filters_exec(struct perf_event *event, void *data)
 		perf_event_stop(event, 1);
 }
 
+static void perf_shmem_ctx_exec(struct perf_event_context *ctx)
+{
+	struct perf_event *event;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&ctx->lock, flags);
+
+	list_for_each_entry(event, &ctx->event_list, event_entry) {
+		if (event->attach_state & PERF_ATTACH_SHMEM) {
+			struct ring_buffer *rb;
+
+			/* Called from the RCU read section in perf_event_exec() */
+			rb = rcu_dereference(event->rb);
+			if (!rb)
+				continue;
+
+			rb->shmem_file_addr = 0; /* signal task work to re-mmap */
+		}
+	}
+
+	raw_spin_unlock_irqrestore(&ctx->lock, flags);
+}
+
 void perf_event_exec(void)
 {
 	struct perf_event_context *ctx;
@@ -6497,6 +6520,7 @@ void perf_event_exec(void)
 
 		perf_iterate_ctx(ctx, perf_event_addr_filters_exec, NULL,
 				   true);
+		perf_shmem_ctx_exec(ctx);
 	}
 	rcu_read_unlock();
 }
-- 
2.14.1
