Message-ID: <tip-cdbaf0a372db2bc3c3127e8b63fd15bd6e6757ee@git.kernel.org>
Date: Fri, 20 Jul 2018 12:37:07 -0700
From: tip-bot for Joerg Roedel <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: will.deacon@....com, acme@...nel.org, namhyung@...nel.org,
jpoimboe@...hat.com, jgross@...e.com, bp@...en8.de,
dhgutteridge@...patico.ca, pavel@....cz, David.Laight@...lab.com,
boris.ostrovsky@...cle.com, gregkh@...uxfoundation.org,
jolsa@...hat.com, brgerst@...il.com, jkosina@...e.cz,
eduval@...zon.com, aarcange@...hat.com,
torvalds@...ux-foundation.org, alexander.shishkin@...ux.intel.com,
linux-kernel@...r.kernel.org, hpa@...or.com, luto@...nel.org,
peterz@...radead.org, mingo@...nel.org, tglx@...utronix.de,
dvlasenk@...hat.com, jroedel@...e.de, llong@...hat.com,
dave.hansen@...el.com
Subject: [tip:x86/pti] perf/core: Make sure the ring-buffer is mapped in all
page-tables
Commit-ID: cdbaf0a372db2bc3c3127e8b63fd15bd6e6757ee
Gitweb: https://git.kernel.org/tip/cdbaf0a372db2bc3c3127e8b63fd15bd6e6757ee
Author: Joerg Roedel <jroedel@...e.de>
AuthorDate: Fri, 20 Jul 2018 18:22:22 +0200
Committer: Thomas Gleixner <tglx@...utronix.de>
CommitDate: Fri, 20 Jul 2018 21:32:08 +0200
perf/core: Make sure the ring-buffer is mapped in all page-tables
The ring-buffer is accessed in the NMI handler, so it's better to avoid
faulting on it. Sync the vmalloc range with all page-tables in the system
to make sure everyone has it mapped.
This fixes a WARN_ON_ONCE() that can be triggered with PTI enabled on
x86-32:
WARNING: CPU: 4 PID: 0 at arch/x86/mm/fault.c:320 vmalloc_fault+0x220/0x230
This triggers because with PTI enabled on a PAE kernel the PMDs are no
longer shared between the page-tables, so the vmalloc changes do not
propagate automatically.
Note: Andy rightly said that we should try to fix the vmalloc code for
that case, but that's not a hot fix for the issue at hand.
Fixes: 7757d607c6b3 ("x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32")
Signed-off-by: Joerg Roedel <jroedel@...e.de>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: "H . Peter Anvin" <hpa@...or.com>
Cc: linux-mm@...ck.org
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Dave Hansen <dave.hansen@...el.com>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Juergen Gross <jgross@...e.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Jiri Kosina <jkosina@...e.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@...cle.com>
Cc: Brian Gerst <brgerst@...il.com>
Cc: David Laight <David.Laight@...lab.com>
Cc: Denys Vlasenko <dvlasenk@...hat.com>
Cc: Eduardo Valentin <eduval@...zon.com>
Cc: Greg KH <gregkh@...uxfoundation.org>
Cc: Will Deacon <will.deacon@....com>
Cc: aliguori@...zon.com
Cc: daniel.gruss@...k.tugraz.at
Cc: hughd@...gle.com
Cc: keescook@...gle.com
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Waiman Long <llong@...hat.com>
Cc: Pavel Machek <pavel@....cz>
Cc: "David H . Gutteridge" <dhgutteridge@...patico.ca>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc: Jiri Olsa <jolsa@...hat.com>
Cc: Namhyung Kim <namhyung@...nel.org>
Cc: joro@...tes.org
Link: https://lkml.kernel.org/r/1532103744-31902-2-git-send-email-joro@8bytes.org
---
kernel/events/ring_buffer.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 5d3cf407e374..7b0e9aafafdf 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -814,6 +814,9 @@ static void rb_free_work(struct work_struct *work)
 	vfree(base);
 	kfree(rb);
+
+	/* Make sure buffer is unmapped in all page-tables */
+	vmalloc_sync_all();
 }
void rb_free(struct ring_buffer *rb)
@@ -840,6 +843,13 @@ struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
 	if (!all_buf)
 		goto fail_all_buf;
 
+	/*
+	 * The buffer is accessed in NMI handlers, make sure it is
+	 * mapped in all page-tables in the system so that we don't
+	 * fault on the range in an NMI handler.
+	 */
+	vmalloc_sync_all();
+
 	rb->user_page = all_buf;
 	rb->data_pages[0] = all_buf + PAGE_SIZE;
 	if (nr_pages) {